Tuesday 15 June 2010

asp.net mvc - Automating Azure VIP Swap


I have an ASP.NET MVC 4 app that is hosted as an Azure web role. I want to do something that seems like it should be very standard: I want to create a function that kicks off a VIP swap operation and raises an event (or invokes a callback) when the swap completes.

To add some context to the situation: my website implements a workflow that takes about an hour (or less) to complete. When I want to release a new version of the website code, I first want to let all existing users finish their in-flight workflows (this is so I have to write very little "backward compatibility" code; the new code never has to deal with data created by the previous version of the code). So a management function on my website will first set a value in the database that disables new workflows; it will then wait until all current workflows have completed; it will then call the "VIP swap" routine; finally, when the VIP swap routine signals completion, it will clear the database value to re-enable new workflows.
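As a rough illustration of that sequence, here is a minimal sketch. IWorkflowRepository, SiteUpgrader, and the injected swap delegate are all hypothetical placeholders for whatever database access and Service Management plumbing you actually have; nothing here is a ready-made API.

    using System;
    using System.Threading;

    // Hypothetical types standing in for your own database and
    // Service Management plumbing; nothing here is a real library type.
    public interface IWorkflowRepository
    {
        void DisableNewWorkflows();
        void EnableNewWorkflows();
        int ActiveWorkflowCount();
    }

    public class SiteUpgrader
    {
        private readonly IWorkflowRepository _repository;
        private readonly Action _performVipSwap;   // e.g. the POST-and-poll routine sketched further down

        public SiteUpgrader(IWorkflowRepository repository, Action performVipSwap)
        {
            _repository = repository;
            _performVipSwap = performVipSwap;
        }

        public void UpgradeSite()
        {
            _repository.DisableNewWorkflows();              // 1. database flag blocks new workflows

            while (_repository.ActiveWorkflowCount() > 0)   // 2. wait for in-flight workflows to drain
            {
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }

            _performVipSwap();                              // 3. POST the swap, then poll until it finishes

            _repository.EnableNewWorkflows();               // 4. clear the flag; new workflows may start
        }
    }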

I have found Microsoft documentation on how to program a VIP swap:

The process involves POSTing to a magic URL with certain headers included, then periodically doing a GET against another magic URL and examining the response code.
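For reference, below is a minimal sketch of that POST-and-poll pattern against the classic Service Management API (the Swap Deployment operation followed by Get Operation Status polling). The subscription ID, hosted service name, deployment names, certificate, and x-ms-version value are placeholders, and the exact URLs, headers, and XML should be checked against the documentation you found rather than taken from here.

    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    using System.Text;
    using System.Threading;
    using System.Xml.Linq;

    // Sketch of the Swap Deployment call and the Get Operation Status polling loop.
    // Subscription ID, hosted service name, deployment names, certificate and the
    // x-ms-version value are all placeholders -- verify them against the docs.
    public static class VipSwapper
    {
        const string SubscriptionId = "<subscription-id>";
        const string ServiceName    = "<hosted-service-name>";
        const string ApiVersion     = "2011-10-01";
        const string Ns             = "http://schemas.microsoft.com/windowsazure";

        public static void SwapAndWait(X509Certificate2 managementCert,
                                       string productionDeployment,
                                       string stagingDeployment)
        {
            // 1. POST the swap request ("the magic URL plus headers").
            var swapUri = string.Format(
                "https://management.core.windows.net/{0}/services/hostedservices/{1}",
                SubscriptionId, ServiceName);

            var request = (HttpWebRequest)WebRequest.Create(swapUri);
            request.Method = "POST";
            request.ClientCertificates.Add(managementCert);
            request.Headers.Add("x-ms-version", ApiVersion);
            request.ContentType = "application/xml";

            var body = new XElement(XName.Get("Swap", Ns),
                new XElement(XName.Get("Production", Ns), productionDeployment),
                new XElement(XName.Get("SourceDeployment", Ns), stagingDeployment));

            var bytes = Encoding.UTF8.GetBytes(body.ToString());
            using (var stream = request.GetRequestStream())
                stream.Write(bytes, 0, bytes.Length);

            string requestId;
            using (var response = (HttpWebResponse)request.GetResponse())
                requestId = response.Headers["x-ms-request-id"];   // token used to poll for completion

            // 2. Periodically GET the operation status until it is no longer "InProgress".
            var statusUri = string.Format(
                "https://management.core.windows.net/{0}/operations/{1}",
                SubscriptionId, requestId);

            while (true)
            {
                var poll = (HttpWebRequest)WebRequest.Create(statusUri);
                poll.ClientCertificates.Add(managementCert);
                poll.Headers.Add("x-ms-version", ApiVersion);

                string status;
                using (var response = (HttpWebResponse)poll.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    status = XDocument.Parse(reader.ReadToEnd())
                                      .Root.Element(XName.Get("Status", Ns)).Value;

                if (status != "InProgress")
                {
                    if (status != "Succeeded")
                        throw new InvalidOperationException("VIP swap failed: " + status);
                    return;                                        // swap finished successfully
                }

                Thread.Sleep(TimeSpan.FromSeconds(30));            // wait before polling again
            }
        }
    }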

The more I think about it, the more non-trivial it seems. Beyond the basic plumbing of a background timer and a completion notification, I don't know what complications, if any, I'll run into trying to do this inside the IIS environment. Can I do the HTTP operations on a background thread? For that matter, will I run into complications with any of the half-dozen different "do work in the background" mechanisms baked into .NET?

Any help or guidance would be greatly appreciated. In particular, I'd be thrilled if someone could point me at a ready-made implementation of this function!

I don't think you're going to find an easy solution here, because the fabric controller is free to do fairly drastic things to your instances without warning. Running an hour-long workflow in a cloud computing environment where an instance can be pulled out from under you (OnStop is called for cleanup, with a maximum of about five minutes) means you need some other mechanism to ensure that all your work actually gets completed.
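For context, that five-minute cleanup window is the OnStop override on the role entry point. A minimal sketch follows; the drain logic is commented out because it is a hypothetical placeholder for however you track running workflows.

    using System;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        // Called when the fabric controller is taking the instance down.
        // There is only a short window (on the order of five minutes) before
        // the instance is stopped regardless of what this method is doing.
        public override void OnStop()
        {
            // Hypothetical: signal running workflows to checkpoint or hand off,
            // then wait briefly for them to drain.
            // WorkflowHost.RequestShutdown();
            // WorkflowHost.WaitForDrain(TimeSpan.FromMinutes(4));

            base.OnStop();
        }
    }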

The simpler question is: if an instance goes down while workflows are still running, do you restart them, or are they lost? If they're lost and you don't care, then killing the workflows for an upgrade is equally unimportant. If you do restart them, use the same mechanism for detecting that a node is going down to redistribute the work accordingly. This pattern is eerily similar to Hadoop. Don't run the workflows on just any ol' instance; submit them to a job-tracker service that decides where they should run (a rough sketch follows below). That job-tracker service can then use the Service Management API to spin up as many instances of whichever version you want to run, execute the workflows on the appropriate nodes, and shut the instances down when they are no longer needed or are out of date.
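A very rough sketch of what such a job-tracker contract might look like is below. Every type and member here is hypothetical; the actual queueing, node-health tracking, and Service Management API calls are left out.

    using System;
    using System.Collections.Generic;

    // Hypothetical job-tracker contract: callers submit workflows here instead of
    // running them on whatever instance happened to receive the web request.
    public interface IJobTracker
    {
        // Queue a workflow; the tracker picks a node running the right code version.
        Guid Submit(WorkflowRequest request);

        // Poll for completion so the web tier never has to babysit a long task.
        JobState GetState(Guid jobId);

        // Called when the tracker notices (or is told) that a node is going away,
        // so in-flight work can be reassigned rather than lost.
        void ReassignWorkFrom(string roleInstanceId);
    }

    public class WorkflowRequest
    {
        public string WorkflowType { get; set; }
        public IDictionary<string, string> Parameters { get; set; }
        public string RequiredCodeVersion { get; set; }   // lets old and new versions coexist
    }

    public enum JobState { Queued, Running, Completed, Failed, Reassigned }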

Unfortunately this isn't the simple solution you were hoping for; rather than trying to shoehorn the feature into your current approach, it calls for some changes to your architecture. Smaller workloads, loosely coupled services, designing for failure, and a few other cloud/distributed computing practices are worth considering. That is part of why Hadoop is built the way it is, and why it has a reputation for getting real work done on a pile of unreliable hardware.
