@manish_kumar_1

There was one mistake in the country column for the records where customer_name = Ayush: it should be country instead of food_delivery_country. I have given the corrected code here; please update your code accordingly.

from pyspark.sql.functions import col, lit

# Existing active rows whose delivery address has changed: mark them inactive
# and close them out with the sales date, keeping the plain `country` column.
old_records = joined_data.where(
        (col("food_delivery_address") != col("city")) & (col("active") == "Y")
    ) \
    .withColumn("active", lit("N")) \
    .withColumn("effective_end_date", col("sales_date")) \
    .select(
        "customer_id",
        "customer_name",
        "city",
        "country",
        "active",
        "effective_start_date",
        "effective_end_date",
    )

@Someonner

A must-do question for experienced people. Very important.

@raghavaraopothuri3262

Being a Telugu speaker with limited Hindi knowledge, I am able to understand your videos and have completed both the theoretical and practical playlists. I will try to connect with you on Topmate once I am ready for the interviews. Thank you so much, Manish :)

@CctnsHelpdesk

Thank you Bhaiya, today I finished the practical playlist as well... You are too good at explaining.
I joined an institute for Azure data engineering but didn't get enough knowledge on the Databricks side.
The topics I got to know from you:
read modes: failfast, permissive, dropMalformed
JSON (multi-line, single-line)
corrupt file handling,
Parquet in detail,
df write/save with bucketBy and partitionBy,
lit(),
union and unionAll,
when/otherwise,
count() as a transformation vs an action,
left anti / left semi joins,
window functions,
SCD2

Fundamentals:
Spark UI
Catalyst Optimizer / Spark SQL engine
sort vs shuffle join
Spark Memory
Adaptive Query Execution
Salting

@Abhishek-Gupta

Finished the practical playlist; a few videos remaining on the theory one, and lots of practice ahead. Easy to understand, as I worked with pandas and SQL before, but yeah, great work. Keep it up 💫💪

@user93-i2k

Thank you again for making this playlist. I just wanted to request that you please add Delta Live Tables, Workflows, and Unity Catalog, as these are very important and much-needed topics. So please find some time to make videos on these topics for us as well.
Thanks in advance... our Data Sir!

@nidhimodani7601

Thank you so much Manish!!
Your way of explaining is really good; I just finished the playlist. Thank you for putting in such effort.
Looking forward to more content like this.

@biswajeetdhal5364

I have watched all your Spark videos. They were really helpful, and the way you explain all the concepts is really amazing.
Thanks a lot.

@navjotsingh-hl1jg

Bhai, loving your videos. Today I completed the whole practical playlist.

@ShivamGupta-wn9mo

Hey, I completed the whole series and practised every question, and now I am confident in PySpark coding. Thanks.

@fashionate6527

You explain things in Hindi in very simple words... thank you so much for that.

@vivekpuurkayastha1580

Hi Manish... great video... Eagerly waiting for the video on problems faced in a Spark project. Please make it next. Thank you.

@vaibhavambhore6305

Thank you so much for teaching Spark in such a simple way. I'm always grateful to you.

@rahulrathore2668

Very nicely and easily explained Spark, sir.

@praneethakuna693

Thank you so much for the great content; I just finished the playlist. Very helpful for my upcoming interviews. :)

@AkshatGupta-ou7zz

Why didn't we use surrogate keys here to implement SCD 2?
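
(Not an answer from the video, just a rough sketch of what a surrogate key could look like here. It assumes a hypothetical dim_customer dimension with a customer_sk column and a new_records DataFrame of incoming rows.)

from pyspark.sql.functions import coalesce, lit, row_number, max as max_
from pyspark.sql.window import Window

# Highest surrogate key already present in the dimension (0 if it is empty).
max_key = dim_customer.select(
    coalesce(max_("customer_sk"), lit(0)).alias("mk")
).collect()[0]["mk"]

# Hand out consecutive keys to the incoming rows on top of the current maximum.
w = Window.orderBy("customer_id")
new_with_sk = new_records.withColumn(
    "customer_sk", lit(max_key) + row_number().over(w)
)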

@kv_data

13:52 (please answer) This solution is for when we want to truncate-load (or overwrite); there could be another solution using MERGE on a Delta table. Can we give that solution in an interview?
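
(A rough sketch of that MERGE idea, not the solution from the video. It assumes the dimension is stored as a Delta table at a hypothetical dim_path, and that updates holds only the changed or new customers, already renamed to the dimension's columns from the correction above, with city carrying the new address.)

from delta.tables import DeltaTable
from pyspark.sql.functions import col, lit

dim = DeltaTable.forPath(spark, dim_path)

# Step 1: expire the currently active rows whose city has changed.
dim.alias("t").merge(
    updates.alias("s"),
    "t.customer_id = s.customer_id AND t.active = 'Y' AND t.city <> s.city"
).whenMatchedUpdate(set={
    "active": lit("N"),
    "effective_end_date": "s.sales_date",
}).execute()

# Step 2: append the new versions as active rows.
updates.withColumn("active", lit("Y")) \
    .withColumn("effective_start_date", col("sales_date")) \
    .withColumn("effective_end_date", lit(None).cast("date")) \
    .select("customer_id", "customer_name", "city", "country",
            "active", "effective_start_date", "effective_end_date") \
    .write.format("delta").mode("append").save(dim_path)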

@MoinKhan-cg8cu

Nice demo, real-time experience.

@karandoke1134

Great series, bro. Nice work.

@shivangigupta32

Can you also share the solution to this problem using a SQL query?
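
(A rough SQL equivalent of the old_records step from the correction at the top, not the video's own solution. It assumes joined_data has been registered as a temp view with the same columns as above.)

joined_data.createOrReplaceTempView("joined_data")

old_records = spark.sql("""
    SELECT customer_id,
           customer_name,
           city,
           country,
           'N'        AS active,
           effective_start_date,
           sales_date AS effective_end_date
    FROM joined_data
    WHERE food_delivery_address <> city
      AND active = 'Y'
""")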