I have now been cloning a bit more, and even a lot more while trying to see if the new clone configuration is working. That makes for a great work speed, since your clone is never less than 4 hours away =)
First, let's just go through how the cloning works, in a light version:
- You request the clone with System Clone->Request Clone. Remember that the data you are cloning is from last night's backup. So if you just changed something, you have to set the clone date to tomorrow if you want your latest changes to be cloned.
- It then looks at the "Preserve Data" table to see what data on the target it should save. That data is stored somewhere else (magically) and will be restored after the clone.
- It copies everything over to the target. If a table is in the "Exclude Tables" list, only an empty table is copied over.
- Then it restores the "preserved data" that was saved before the copy.
- Finally, it runs the post-clone cleanup scripts.
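To make the order of operations explicit, here is a purely illustrative sketch; none of these function names are real ServiceNow APIs, they just mirror the steps above:

```javascript
// Purely illustrative - these are NOT real ServiceNow APIs, they only
// spell out the order of the clone steps described above.
function savePreservedData(target) { return {}; }     // stash rows matched by "Preserve Data"
function copyEverything(source, target) { }           // excluded tables arrive empty
function restorePreservedData(target, preserved) { }  // put the stashed rows back
function runPostCloneCleanup(target) { }              // post-clone cleanup scripts

function cloneInstance(source, target) {
    var preserved = savePreservedData(target);  // 1. save target data first
    copyEverything(source, target);             // 2. copy the source over the target
    restorePreservedData(target, preserved);    // 3. restore the saved data
    runPostCloneCleanup(target);                // 4. cleanup scripts run last
}
```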
Now, to save data on the target. There is pretty much only one way to do it, and it's nice that for once there aren't 10 different ways to reach the same goal.
System Clone->Preserve Data. Here you specify what you want to keep on the target. You can preserve a whole table, or you can set conditions, like everywhere else in ServiceNow, to only keep a few records/properties from a table.
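If you're unsure what a condition will actually match, you can preview it with a quick background script before trusting it to a clone. This is just a sketch; the table and encoded query are examples, not anything from our setup:

```javascript
// Preview which records a preserve-data style condition would match.
// The table and encoded query below are only examples.
var gr = new GlideRecord('sys_properties');
gr.addEncodedQuery('name=glide.product.name');
gr.query();
gs.log('Condition matches ' + gr.getRowCount() + ' record(s)');
```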
What took a few clones to figure out was how to keep data on the target without also copying the corresponding data over from production. In our case it was the "update sources". On our production the source is our test environment, and on test it's our dev environment. Pretty simple, but when we cloned production to test, our test environment suddenly had itself as an update source, and that felt a bit weird...
The problem was that it wasn't enough to put the Remote Instances table (sys_update_set_source) in the preserve data. That kept the correct record, but the clone still copied over the source from production, so we suddenly had two sources (dev & test) on our test environment.
To make this work, we also had to add the table ([sys_update_set_source]) to the exclude tables. Then the clone keeps the record on test and only copies over an empty table.
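If a clone has already left you with the extra production source on the target, a background script along these lines can clear it out. The name "Production" is a placeholder; check your own Remote Instances list first:

```javascript
// Remove a leftover update source that was copied over from production.
// 'Production' is a placeholder - use the name of the unwanted record
// in your own sys_update_set_source table.
var src = new GlideRecord('sys_update_set_source');
src.addQuery('name', 'Production');
src.query();
while (src.next()) {
    gs.log('Deleting leftover update source: ' + src.getValue('name'));
    src.deleteRecord();
}
```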
This is what else we added for now:
* It might not be a big thing, but it annoys me a lot: the name on the browser tab. So we added the property (glide.product.name) to the preserve data so it keeps its value (see the sketch below).
* Bookmarks are another thing you probably don't want coming over with a clone, so we added the bookmarks table (sys_ui_bookmark) to the preserve data as well.
* Since we use an LDAP sync for our users, we didn't want the LDAP settings or schedules to be cloned either, so we kept those away from the clone as well.
It's not much extra work, and it removes a bit of reconfiguration after each clone. It's nice to be able to do other things instead of redoing the same setup every time you clone.
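As a small example of the browser tab fix above, the property can be checked (or set back) from a background script. gs.getProperty and gs.setProperty are standard GlideSystem calls; the value below is made up:

```javascript
// Check that the preserved browser tab title survived the clone.
var title = gs.getProperty('glide.product.name');
gs.log('Browser tab title is: ' + title);

// If the clone overwrote it anyway, put it back (example value):
// gs.setProperty('glide.product.name', 'ACME - TEST');
```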
The last thing is something we learned the hard way about cloning, and it comes down to the fact that "Exclude audit and log data" is checked by default when you request a clone.
"Exclude audit and log data" means that audit data(which consists of the update times for the journal entries and other changes to the record) is not moved over to the target instance. That is the reason why you will see the updates and entries grouped into one big activity.
* By default the workflows don't come over with the clone. So don't use old "cloned" records when you're trying to figure out why your workflow isn't working, or wondering why the "Show Workflow" link doesn't show up on all changes. It only shows on records created on the instance, since the cloned ones don't have a workflow context.
* The activity log is messed up. The timestamps don't tag along with the clone, which means the incident is correctly updated, but all the updates end up in one log entry on the target instead of being split up like on the source. The first time we noticed this was when we upgraded from Eureka to Fuji, and it took quite a while before we understood that this wasn't caused by the Fuji upgrade and in fact worked as intended...
Both the log and the workflow issues can be solved. They occur because "Exclude audit and log data" is checked by default, which means the tables in the "exclude tables" don't tag along in the clone. By default, the tables handling the workflow and log structure are already in the exclude tables. So unchecking "Exclude audit and log data" will make both the workflow and the activity log look normal on the cloned records. But there is a reason why it's checked by default, so only uncheck it in specific cases.
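If you want to check whether a specific record lost its workflow in the clone, you can look for its context in the wf_context table. A minimal background-script sketch (the sys_id is a placeholder you'd replace):

```javascript
// Check if a change request still has a workflow context after a clone.
// Replace the sys_id below with one from a record you are investigating.
var ctx = new GlideRecord('wf_context');
ctx.addQuery('table', 'change_request');
ctx.addQuery('id', '<sys_id of the change>'); // placeholder value
ctx.query();
if (ctx.next()) {
    gs.log('Workflow context found - "Show Workflow" should appear');
} else {
    gs.log('No workflow context - this record was probably cloned');
}
```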
//Göran
I did my first clone yesterday and ran into the exact same problem with update sets. I am trying again, but this time excluding and preserving data on these tables:
sys_update_set_source
sys_remote_update_set
sys_update_set_log
I also ran into issues with the SAML SSO certificate, so I am excluding/preserving this table as well:
sys_certificate
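A minimal post-clone sanity check, assuming these are the tables you preserved/excluded, could be a background script that counts what's left on the target:

```javascript
// Count records in the preserved/excluded tables on the target after
// the clone, to confirm nothing from the source sneaked in.
var tables = ['sys_update_set_source', 'sys_remote_update_set',
              'sys_update_set_log', 'sys_certificate'];
for (var i = 0; i < tables.length; i++) {
    var gr = new GlideRecord(tables[i]);
    gr.query();
    gs.log(tables[i] + ': ' + gr.getRowCount() + ' record(s)');
}
```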
Hope it went better for you this time. Cloning can be a real menace until you get the hang of it and figure out what you really want to keep and what you don't...
//Göran
Hi Goran, great post. So how do you set up a clone so that your active contexts from the source instance are active on the target instance? Uncheck "Exclude audit & log data"?
Hi,
Yes, by unchecking that you will also clone over logs, workflows etc. From what I've experienced, if you uncheck it, it won't mind the tables you have in the exclude tables either, so it's more all or nothing. But I haven't verified it, it's just a feeling I got.
If you have more questions, please feel free to connect on LinkedIn or at the community. I don't visit this blog so much anymore now that I'm writing for ServiceNow.
ATF - Automated Test Framework. I do all my testing in dev (or test) only and not in prod. So when cloning from prod to dev I need all my ATF tests, test results and test data preserved.
Need specifics on how to do this.
Not sure if you still need the answer, but there is a good answer for that on the community:
https://community.servicenow.com/message/1094227#1094227