Search Results

Search found 104718 results on 4189 pages for 'sql server 11'.


  • SSIS Prehistory video

    - by jamiet
    I’m currently spending my Easter bank holiday putting together my presentation, SSIS Dataflow Performance Tuning, for the upcoming SQL Bits conference in London, and in doing so I’m researching some old material about how the dataflow actually works. Boring as it is, I’ve gotten easily sidetracked and have chanced upon an old video on Channel 9 entitled Euan Garden - Tour of SQL Server Team (part I). Euan is a former member of the SQL Server team, and in this series of videos he walks the halls of the SQL Server building on Microsoft’s Redmond campus talking to some of the various protagonists; in this one he happens upon the SQL Server Integration Services team. The video was shot in 2004, so it is a fascinating (to me anyway) glimpse into the development of SSIS from before it was ever shipped, and if you’re a geek like me you’ll really enjoy this behind-the-scenes look into how and why the product was architected. The video is also notable for the presence of the cameraman – none other than the now-rather-more-famous-than-he-was-then Robert Scoble. See it at http://channel9.msdn.com/posts/TheChannel9Team/Euan-Garden-Tour-of-SQL-Server-Team-part-I/ Enjoy! @Jamiet

    Read the article

  • Remote desktop to dedicated server running CentOS 5

    - by Saif Bechan
    I have a dedicated server running CentOS. I was wondering if it is possible to remote desktop to this server. If this is possible, can someone explain to me how this works? I have downloaded a VMware image of CentOS, and I can run it with VMware Player, but I don't know what to do next. Can someone guide me in the right direction?
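    One common route (not the only one) is a VNC session rather than RDP: install a VNC server on the CentOS box and connect to it with a VNC viewer from your desktop. Below is a minimal sketch for CentOS 5; the user name, display number and geometry are placeholders, and port 5901 must be reachable through any firewall in between.

        # on the CentOS 5 server, as root
        yum install vnc-server                                      # package is named tigervnc-server on CentOS 6+
        echo 'VNCSERVERS="1:someuser"' >> /etc/sysconfig/vncservers # desktop session for "someuser" on display :1
        echo 'VNCSERVERARGS[1]="-geometry 1024x768"' >> /etc/sysconfig/vncservers
        su - someuser -c vncpasswd                                  # set that user's VNC password
        service vncserver start                                     # display :1 listens on TCP 5901

        # from the client: point a VNC viewer at server-address:1 (TCP 5901)

    Alternatively, plain SSH (with X forwarding if needed) is usually enough for administering a headless CentOS server.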

    Read the article

  • ColdFusion Server crash after thousands of HTTP requests

    - by Jason Bristol
    We are running ColdFusion 8 on a Windows Server 2003 VPS, with an API that exposes student records to a partner API through a connector. Our API returns around 50k student records serialized in XML format pretty seamlessly. My question originates from something very frightening that happened today when we tested our connector against our partner's API: our entire website and web host went down. We assumed that our host was just having some issues, and after 4 hours with no resolution and no response from their customer service we finally got a response from them claiming that they had had an "unauthorized user" in their network. After our server was back up we were unable to connect to our website, as if the web service or ColdFusion itself had frozen. This is really where my concern comes from, as I fear we may have overloaded the web service. As mentioned, we tried sending over 50k HTTP POST requests to our partner's API, but everything stopped after around 1.6k. Is this bad practice, or is there some sort of rate limiting I can relax somewhere in the server configuration? We managed to find a workaround, but it bypasses our connector, which is critical to our design. This would have been a one-time deal, as the purpose of so many requests was to populate our partner's website with current data; after that, hourly syncs will keep requests down to around 100 per hour.

    UPDATE: Our partner API is owned and operated by Pardot. We are converting students to prospects by passing student data to their API, which unfortunately only seems to accept one student at a time. For that reason we have to make all 50k requests individually. Our server has 4GB of RAM and an Intel Core 2 Duo @ 2.8GHz, running Windows Server 2003 SP2. I monitored the server during a 100-student sync, a 400-student sync, and a 1.4k-student sync with the following results:

    100 students - 2.25GB of memory, 30-40% CPU utilization, 0.2-0.3% network bandwidth
    400 students - 2.30GB of memory, 30-50% CPU utilization, 0.2-1.0% network bandwidth
    1.4k students - 2.30GB of memory, 30-70% CPU utilization, 0.2-1.0% network bandwidth

    I know this is a far cry from 50k students, but I don't want to risk taking down our CMS system again, assuming that was the cause.
    To give you a look at our code:

        <cfif (#getStudents.statusCode# eq "200 OK")>
            <cftry>
                <cfloop index="StudentXML" array="#XmlSearch(responseSTUD,'/students/student')#">
                    <cfset StudentXML = XmlParse(StudentXML)>
                    <cfhttp url="#PARDOT_CMS_UPSERT#" method="post" timeout="10000">
                        <cfhttpparam type="url" name="user_key" value="#PARDOT_CMS_USERKEY#">
                        <cfhttpparam type="url" name="api_key" value="#api_key#">
                        <cfhttpparam type="url" name="email" value="#StudentXML.student.email.XmlText#">
                        <cfhttpparam type="url" name="first_name" value="#StudentXML.student.first.XmlText#">
                        <cfhttpparam type="url" name="last_name" value="#StudentXML.student.last.XmlText#">
                        <cfhttpparam type="url" name="in_cms" value="#StudentXML.student.studentid.XmlText#">
                        <cfhttpparam type="url" name="company" value="#StudentXML.student.agencyname.XmlText#">
                        <cfhttpparam type="url" name="country" value="#StudentXML.student.countryname.XmlText#">
                        <cfhttpparam type="url" name="address_one" value="#StudentXML.student.address.XmlText#">
                        <cfhttpparam type="url" name="address_two" value="#StudentXML.student.address2.XmlText#">
                        <cfhttpparam type="url" name="city" value="#StudentXML.student.city.XmlText#">
                        <cfhttpparam type="url" name="state" value="#StudentXML.student.state_province.XmlText#">
                        <cfhttpparam type="url" name="zip" value="#StudentXML.student.postalcode.XmlText#">
                        <cfhttpparam type="url" name="phone" value="#StudentXML.student.phone.XmlText#">
                        <cfhttpparam type="url" name="fax" value="#StudentXML.student.fax.XmlText#">
                        <cfhttpparam type="url" name="output" value="simple">
                    </cfhttp>
                </cfloop>
                <cfcatch type="any">
                    <cfdump var="#cfcatch.Message#">
                </cfcatch>
            </cftry>
        </cfif>

    UPDATE 2: I checked the CF logs and found a couple of these:

        "Error","jrpp-8","06/06/13","16:10:18","CMS-API","Java heap space The specific sequence of files included or processed is: D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm, line: 675 "
        java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOf(Arrays.java:2882)
            at java.io.CharArrayWriter.write(CharArrayWriter.java:105)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:37)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:50)
            at coldfusion.runtime.NeoBodyContent.write(NeoBodyContent.java:254)
            at cfapi2ecfm292155732._factor30(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:675)
            at cfapi2ecfm292155732._factor31(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:662)
            at cfapi2ecfm292155732._factor36(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:659)
            at cfapi2ecfm292155732._factor42(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:657)
            at cfapi2ecfm292155732._factor37(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor44(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:456)
            at cfapi2ecfm292155732._factor38(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor46(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:455)
            at cfapi2ecfm292155732._factor39(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor47(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:453)
            at cfapi2ecfm292155732.runPage(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:1)
            at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:192)
            at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:366)
            at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
            at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279)
            at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
            at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
            at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
            at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
            at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
            at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
            at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
            at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
            at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
            at coldfusion.CfmServlet.service(CfmServlet.java:175)
            at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
            at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)

    Looks like I might have crashed the JVM in CF. Is there a better way to do this? We are thinking of just exporting all records initially as a CSV file and importing it into Pardot, seeing as we will never have to do a request this large again.
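    The OutOfMemoryError above suggests the ColdFusion JVM heap is being exhausted while the request's output buffer grows inside the loop. One thing sometimes worth checking (a sketch only, not a definitive fix) is the maximum heap size in ColdFusion 8's jvm.config; the path and the 1024m value below are assumptions to adapt to the actual install, and the ColdFusion 8 Application Server service must be restarted afterwards:

        # C:\ColdFusion8\runtime\bin\jvm.config   (path varies by install type)
        # change only the -Xmx value inside the existing java.args line, for example:
        java.args=-server -Xmx1024m ...(keep the rest of the original arguments)...

    Flushing output periodically with cfflush, or writing progress to a file instead of building it up in the page output inside the loop, can also keep the per-request buffer small.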

    Read the article

  • Packet dropped even when firewall is turned off in Windows Server 2008

    - by LightX
    We have a Windows Server 2008 machine, and lately we have started seeing a lot of 5152 events logged on the server (Windows Filtering Platform blocked a packet). We have an inbound rule configured to allow connections to the port, which was working fine earlier. I'm not sure what changed lately, but this doesn't make any sense: the packet is dropped even when Windows Firewall is disabled. What am I missing?
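    Two things that are sometimes worth checking here (a sketch only; run from an elevated prompt): the Windows Filtering Platform can carry filters from IPsec policies, NAP or third-party security products even while the Windows Firewall service is off, and event 5152 is an audit entry that can be silenced independently of the filtering itself.

        rem dump the current WFP state and filters to XML files for inspection
        netsh wfp show state
        netsh wfp show filters

        rem see whether packet-drop auditing is enabled, and optionally turn it off
        auditpol /get /subcategory:"Filtering Platform Packet Drop"
        auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:disable /failure:disable

    Disabling the audit only stops the 5152 noise; the state/filters output is what shows which provider owns the filter that is actually blocking the packet.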

    Read the article

  • Scripting a Windows 2008 Cluster from Windows 2003

    - by glancep
    Our current environment is all Windows 2003. When we migrate a new version of our service to the cluster, we first stop the service with a command like: cluster.exe <clusterName> resource "<serviceName>" /offline We do the same in reverse after the migration to bring the service back online. Now we are upgrading our environment to new Windows 2008 servers; however, our build/migrate machine will remain Windows 2003. When issuing the same command from Windows 2003 against Windows 2008, we get: "System error 1722 has occurred (0x000006ba). The RPC server is unavailable." We need to be able to remotely administer a Windows 2008 cluster from a Windows 2003 server in an automated fashion (such as with the command-line cluster.exe utility). Is this possible? Thanks, Gideon
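    The 2003-era cluster.exe generally cannot talk to a 2008 failover cluster directly, so one workaround people use (a sketch only; the host, cluster and resource names below are placeholders, and it assumes the Failover Clustering command-line tools are installed on the 2008 node) is to run the 2008 copy of cluster.exe remotely from the 2003 build machine, for example with Sysinternals PsExec:

        psexec \\win2008node cluster.exe /cluster:MyCluster resource "MyService" /offline
        rem ... run the migration steps ...
        psexec \\win2008node cluster.exe /cluster:MyCluster resource "MyService" /online

    This keeps the automation on the Windows 2003 box while the cluster API calls happen on a node that speaks the 2008 cluster protocol.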

    Read the article

  • DFS keeps constantly replicating almost all files

    - by Adrian Godong
    We have always had problems with DFS, but recently it has gotten worse (for no apparent reason), to the point where it's becoming harmful. We have one master server and DFS connections to four other servers. The four servers don't modify any files, so all replication always propagates from the master to the four other servers. The replicated directory has about 900,000 files. In recent weeks, every time we check DFS, the DFS backlogs have hundreds of thousands of files. For instance, right now the master server is replicating about 700,000 files to three of the four servers, while the fourth one is fine. Sometimes only one is off, sometimes two, and this time three. Also, it is never the same set of servers. It is inconceivable that something periodically touches all 900,000 files; the biggest change that happens is a scheduled update of several thousand files every six hours. Does anybody have the same problem? Is it a known issue?
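    One way to quantify what DFSR actually thinks is outstanding (a sketch only; the replication group, folder and member names are placeholders) is dfsrdiag on one of the members:

        dfsrdiag backlog /RGName:"My Replication Group" /RFName:"My Replicated Folder" /SendingMember:MASTER /ReceivingMember:SERVER2

    Comparing the reported backlog against the DFSR debug logs (under %systemroot%\debug on the members) usually shows whether files are genuinely being re-replicated or whether the backlog counter itself is misleading.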

    Read the article

  • Viewing auto-created printers on a 2008 R2 Remote Desktop Services server

    - by LukeR
    On our 2003 Terminal Servers I am able to view any auto-created printers for users connected to the server; however, on a new 2008 R2 RDS server I can only view local printers and my own auto-created printer(s). I have local and domain admin privileges. Is there something I need to change to be able to view all client printers? Is it possible? I have had a look for permissions relating to this but couldn't really find much that looked relevant.

    Read the article

  • SQL Server on Linux

    - by TimothyAWiseman
    For a particular project that is coming up, I am trying to expand my knowledge of Linux, so I am going to set up a Linux system at home. Rather than dual booting, I am thinking about putting SQL Server on a Windows virtual machine with Linux as the host, at least until this project is over, when I will probably switch back to Linux. So, I have a couple of different but interrelated questions: How well does this work? This is only a test machine at home, so I can easily accept a fair bit of degradation, but if it is going to be a horrible reduction in performance I will dual boot instead. Is there a particular virtual machine manager I should look at to go this route? Since this is my personal machine, price is an issue, but I am quite happy to pay a reasonable amount. And finally, given the choice of VMM, is there a particular Linux distro I should be looking at? [This has been cross-posted at Ask.SqlServerCentral.com. I think it may be appropriate at both sites.]

    Read the article

  • Windows server backup question

    - by serveradminguy
    Hi, is it possible to make a backup of my Windows Server with the built-in Microsoft tool such that, as long as my Hyper-V virtual machines are stored safely elsewhere and are not included in that backup, I can still restore Windows Server from the native backup and keep using the Hyper-V machines? In other words, if I lost my C:\ and my VMs are stored remotely, can I restore from an earlier backup and use those VMs?
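    In case it helps frame the question, the built-in tool can be driven from the command line with wbadmin; the sketch below is only an illustration (the target drive and included volume are placeholders), and it backs up the host OS volume without touching VMs that live on other storage:

        rem back up the system/OS volume to an attached backup disk
        wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet

        rem list the available backup versions when it is time to restore
        wbadmin get versions

    Whether the restored host picks the VMs up again then depends on the Hyper-V configuration being reachable (or the VMs being re-imported) after the restore.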

    Read the article

  • VMware Server - 3D graphics upgrade like Workstation 7 and VMware Player

    - by jimbo45
    Hi all, anybody have an idea as to when VMware Server will get an upgrade to support 3D graphics etc.? I have created some Windows 7 VMs under Workstation 7 with upgraded VMware Tools -- the Windows 7 VMs run full Aero etc. I believe that VMware Tools and the virtual graphics for VMs running under VMware Server are at the Workstation 6.5 level -- hence problems can occur. Any info on an upgrade? Cheers, jimbo

    Read the article

  • Creating a software RAID mirror from an HDD with bad blocks - how to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above, I created a RAID 1 from this disk and another one, using Windows Server 2008 R2 software RAID volumes. The volume in Disk Manager is marked as "Failed Redundancy" and "At Risk". I can use the "Reactivate Disk" command and it starts to re-sync, but after a while it stops and returns to the previous state: it stops the re-sync at a bad block on the old disk and logs the same error in the System event log. The old disk's status is Errors, the new disk's status is Online. How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.

    Read the article

  • Eclipse has multiple issues after JRE-6 (OpenJDK) upgrade

    - by Eusebius
    I'm on 12.04 LTS and trying to use Eclipse Indigo. This morning Ubuntu made me update the following packages:

        Preparing to replace icedtea-6-jre-cacao 6b24-1.11.3-1ubuntu0.12.04.1 (using .../icedtea-6-jre-cacao_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ...
        Unpacking replacement icedtea-6-jre-cacao ...
        Preparing to replace openjdk-6-jre-lib 6b24-1.11.3-1ubuntu0.12.04.1 (using .../openjdk-6-jre-lib_6b24-1.11.4-1ubuntu0.12.04.1_all.deb) ...
        Unpacking replacement openjdk-6-jre-lib ...
        Preparing to replace icedtea-6-jre-jamvm 6b24-1.11.3-1ubuntu0.12.04.1 (using .../icedtea-6-jre-jamvm_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ...
        Unpacking replacement icedtea-6-jre-jamvm ...
        Preparing to replace openjdk-6-jre-headless 6b24-1.11.3-1ubuntu0.12.04.1 (using .../openjdk-6-jre-headless_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ...
        Unpacking replacement openjdk-6-jre-headless ...
        Preparing to replace openjdk-6-jre 6b24-1.11.3-1ubuntu0.12.04.1 (using .../openjdk-6-jre_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ...
        Unpacking replacement openjdk-6-jre ...

    After that (but I cannot swear it is the root cause), I have the following issues in Eclipse:

    When trying to launch the simplest HelloWorld program (which behaves fine with manual javac/java), I get either nothing or: An internal error occurred during: "Launching HelloWorld". org/eclipse/jdt/debug/core/JDIDebugModel

    I get an "Error log" tab in the console panel, with an error: Could not create the view: An unexpected exception was thrown. (Followed by a long NullPointerException stack trace between sun.util.calendar.ZoneInfoFile.getZoneIDs(ZoneInfoFile.java:785) and org.eclipse.equinox.launcher.Main.main(Main.java:1386).)

    When trying to access the Installed JREs part of the preferences, I get a popup saying: Unable to create the selected preference page. An error occurred while automatically activating bundle org.eclipse.jdt.debug.ui (162). And the preference tab says: An error has occurred when creating this preference page.

    Until today I had a manually installed Eclipse (one of the official bundles available on their site); I've tried to replace it with the repository version and I get the same errors. What should I do to make Eclipse work again? Another person reports: Same happened to me after updating last night. Already tried reinstalling Eclipse and Java, starting Eclipse with -clean and starting a new workspace and new .eclipse dir, but nothing helps.
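    A couple of things that are sometimes worth trying after a JRE update like this (sketches only; the JVM path below is the usual OpenJDK 6 location on 64-bit 12.04 and should be checked against what is actually installed): first, see which java Eclipse would pick up by default and switch if several JVMs are installed:

        sudo update-alternatives --config java

    and/or pin Eclipse to a specific JVM by adding a -vm entry to eclipse.ini (two separate lines, placed before -vmargs):

        -vm
        /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java

    Starting Eclipse once with -clean after changing the JVM forces the OSGi bundle caches to be rebuilt, which the reporter above had already tried on its own.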

    Read the article

  • Win 2003 Terminal Server, Logoff Domain Users after 30 mins

    - by Kabir Rao
    We have a terminal server in our environment where Domain\Domain Users can log in through Remote Desktop. We want to force these users to log off after half an hour. However, Administrators (this includes Domain\Domain Admins) should not be affected by this; they should be able to connect to the terminal server without any interruption. Can somebody guide us on this? Thanks upfront, Kabir
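    For reference, the session time limits live under the Terminal Services policy key and can be pushed by Group Policy or set directly in the registry; the sketch below uses example values (1800000 ms = 30 minutes) to cap active sessions and to terminate rather than disconnect them when the limit is hit. Note that a machine-wide setting like this applies to everyone who logs on, so exempting administrators would still need a separately scoped GPO (security filtering) or per-user settings:

        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v MaxConnectionTime /t REG_DWORD /d 1800000 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v fResetBroken /t REG_DWORD /d 1 /f

    The same settings are exposed in Group Policy under Computer Configuration > Administrative Templates > Windows Components > Terminal Services > Sessions.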

    Read the article

  • IIS High use & Server Performance issues

    - by HaydnWVN
    We have an SBS 2011 server running Exchange, a database application and a few other things, serving 5 users (3 low use, 1 high). The server was never specced for the database app, so it isn't as powerful as I'd like... only 12GB RAM. We have increasingly found performance problems with this server; last week it was so bad I couldn't even connect remotely. To free up some available RAM I have (over the past month or so): restricted the Exchange message store to 1GB, with (so far) no ill effects; and restricted the SQL databases (including SBSMonitoring and SharePoint/##SSEE, which isn't used). Now I am finding that IIS worker processes are using up the available memory, and I have (so far) been unable to track down much useful information about restricting them. This server is not 'serving' anything web-based apart from OWA, which I am finding people using because Outlook is so slow (again related to the server's performance). I am aware that Exchange on SBS 2011 is designed to use up available resources (and to concede them when other applications ask), but it is not doing so (or anywhere near fast enough) for our needs. Opening the database application (using Postgres) takes 5+ minutes from client machines and regularly times out or crashes because of this. After a reboot (before the SQL/Exchange/IIS caches have grown very large) we get the performance we need and expect. Previously a reboot once a month was enough... then once a week... now they have taken to rebooting it almost daily!
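    If the culprit really is an IIS application pool, the pool list and per-pool memory recycling limits can be inspected and adjusted with appcmd; this is only a sketch (the pool name and the 500000 KB ceiling are placeholders), and on SBS 2011 the Exchange-related pools should be changed cautiously:

        %windir%\system32\inetsrv\appcmd.exe list apppool
        %windir%\system32\inetsrv\appcmd.exe list wp
        %windir%\system32\inetsrv\appcmd.exe set apppool "SomeAppPool" /recycling.periodicRestart.privateMemory:500000

    Cross-checking which w3wp.exe belongs to which pool (the "list wp" output) before capping anything usually avoids recycling the wrong workload.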

    Read the article

  • How to stop a random ramp in FCGI processes killing the server

    - by Andy Main
    So I got the below earlier today... Around that time the logs show a ramp in processes (600), associated memory (1.2GB) and CPU usage/load average (80), until the server gave out. The server had to be hard reset by the host as there was no SSH or Plesk panel access. FastCGI is configured as below and is set up for one high-use site. As I understand it, FcgidMaxProcesses 20 should protect against what happened, but it has not. I've read many forums with differing answers and references to many different fcgid directives, but have found nothing conclusive. Anyone got some definitive answers on how to stop this sort of server process ramping and subsequent server failure? If you need more info let me know. Cheers, Andy

    /var/log/apache2/error_log

        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17651 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17650 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17649 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17644 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17643 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17638 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17633 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17627 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17622 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17674 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17673 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17672 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17667 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17666 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17665 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17664 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17659 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17658 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17657 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17656 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17651 graceful kill fail, sending SIGKILL

    https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VRmFLWEZfR2VBb2M
    https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VWTcwZEhoV2Fqejg
    https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VUUtVWWFINHZjZ0U
    https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VZEVMclh6ZUdaOUE

        <IfModule mod_fcgid.c>
          <IfModule !mod_fastcgi.c>
            AddHandler fcgid-script fcg fcgi fpl
          </IfModule>
          FcgidIPCDir /var/lib/apache2/fcgid/sock
          FcgidProcessTableFile /var/lib/apache2/fcgid/shm
          FcgidIdleTimeout 40
          FcgidProcessLifeTime 30
          FcgidMaxProcesses 20
          FcgidMaxProcessesPerClass 20
          FcgidMinProcessesPerClass 0
          FcgidConnectTimeout 30
          FcgidIOTimeout 120
          FcgidInitialEnv RAILS_ENV production
          FcgidIdleScanInterval 10
          FcgidMaxRequestLen 1073741824
        </IfModule>

    Read the article

  • Windows 2003 DNS server and DNSSEC

    - by pQd
    Hi, I have an almost out-of-the-box Windows 2003 server which is also the DNS server for some users. Should I be worried about the 5th of May deployment of DNSSEC on the root name servers? I have already run: dnscmd /Config /EnableEDnsProbes 1 Thanks a lot!

    Read the article

  • DHCP and DNS on non-AD 2003 Server: PTR is updating but no A records

    - by user29819
    I have a strange issue: I have a DHCP and DNS server running in a non-AD environment on Windows Server 2003. I set up DHCP to update DNS A and PTR records even if the client doesn't request it, but I only see PTR records updated; the A records are not created at all. The domain is "local", the forward zone is called "local", and DHCP option 15 is set to "local" (the actual name). The PTR records are created with the right name (example: win64_ent.local). What am I missing here?
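    One setting worth double-checking in a workgroup setup like this (a sketch only; the zone name "local" is taken from the question, and the command runs on the DNS server) is whether the forward zone actually accepts dynamic updates at all, since a standard primary zone outside AD defaults to allowing none and secure-only updates are not available without AD:

        rem 1 = allow nonsecure (and secure) dynamic updates on the zone
        dnscmd /Config local /AllowUpdate 1

    If the reverse zone already allows updates but the forward zone does not, that would produce exactly this pattern of PTR records appearing while A records never do.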

    Read the article

  • Smss.exe - setting any core affinity breaks RDP on Windows 7 / Windows Server 2008 R2

    - by Hetman
    I have tried to set the core affinity of smss.exe so that it does not run on one critical core, on Windows 7 and Windows Server 2008 R2. It turns out that simply setting the core affinity to anything (even the full mask that smss.exe already has) appears to work, but it prevents users from RDP'ing into the machine until it is restarted. Users already logged in may continue to use their sessions. This behaviour does not occur on Windows 8 / Windows Server 2012. Does anyone know why it is happening?

    Read the article

  • Prevent server from accessing network share

    - by Daveo
    I have built a test environment that I want to isolate from my production servers. As part of this, I want to prevent my new server from accessing file/network shares on production - for example, prevent it from accessing \\MyProdServer. I updated my hosts file with 0.0.0.0 MyProdServer, however this doesn't work and I can still access the files. I do not have a firewall available, nor do I want to edit my production server in any way.
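    One likely reason the hosts entry has no effect is that Windows can still resolve the server name via NetBIOS/WINS broadcast, or connect by IP or FQDN, so the share stays reachable. Where no firewall is available, a local IPsec blocking policy on the test server is a common alternative; the sketch below assumes a Windows test box, uses netsh ipsec (available on 2003 and later), and the 10.0.0.5 address is a placeholder for the production server's IP:

        netsh ipsec static add policy name=BlockProd
        netsh ipsec static add filterlist name=ProdFilters
        netsh ipsec static add filter filterlist=ProdFilters srcaddr=me dstaddr=10.0.0.5
        netsh ipsec static add filteraction name=BlockAction action=block
        netsh ipsec static add rule name=BlockProdRule policy=BlockProd filterlist=ProdFilters filteraction=BlockAction
        netsh ipsec static set policy name=BlockProd assign=y

    Because the policy lives entirely on the test server, nothing on the production side has to change.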

    Read the article

  • Writing a SQL Azure Book - Notes

    - by Herve Roggero
    Over the last few months I have had the opportunity to ramp up significantly on SQL Azure. In fact, I will be the co-author of Pro SQL Azure, published by Apress. This is going to be a book on how to best leverage SQL Azure, both from a technology and a design standpoint. Talking about design, one of the things I realized is that understanding the key limitations and boundary parameters of Azure in general, and more specifically SQL Azure, will play an important role in making sound design decisions that both meet reasonable performance requirements and minimize the costs associated with running a cloud computing solution. The book touches on many design considerations including link encryption, the pricing model, design patterns, and also some important performance techniques that need to be leveraged when developing in Azure, including caching, lazy properties and more. Finally, I started working with shards and how to implement them in Azure to ensure database scalability beyond the current size limitations. Implementing shards is not simple, and the book will address how to create a shard technology within your code to provide a scale-out mechanism for your SQL Azure databases. As you can see, there are many factors to consider when designing a SQL Azure database. While we can think of SQL Azure as a cloud version of SQL Server, it is best to look at it as a new platform, to make sure you don't make any assumptions on how to best leverage it.

    Read the article
