Search Results

Search found 65464 results on 2619 pages for 'web based backup'.


  • PHP Web Services - Nice try

    Thanks to its membership in the O'Reilly User Group Programme, the Mauritius Software Craftsmanship Community (short: MSCC) recently received a welcome package with several book titles. Among them is the latest publication by Lorna Jane Mitchell - 'PHP Web Services: APIs for the Modern Web'. The following is the book review I posted on Amazon: Nice try!

    Initially, I was astonished that a small book like 'PHP Web Services' would be able to cover all the interesting topics about APIs and web services, regardless of whether they are written in PHP or not. Unfortunately, the title doesn't live up to the reader's (or at least my) expectations. In its defense, there is no usual paragraph about the intended audience of the book, but I still have to admit that the first half (chapters 1 to 8) is well written and Lorna makes her points on the various technologies. The PHP code samples are also clean and easy to understand.

    With the chapter 'Debugging Web Services', the book started to change my mind about the clarity of its advice and its instructions on designing and developing good APIs. This may be because I have been using other tools for years, such as Telerik Fiddler as an HTTP proxy to trace and inspect any kind of request/response handling - including localhost monitoring, SSL certificate acceptance, and the ability to debug mobile devices, especially iOS-based ones. Compared to Charles, Fiddler is available for free.

    What really put me off is the following statement in chapter 10 about Service Type Decisions: "For users who have larger systems using technology stacks such as Java, C++, or .NET, it may be easier for them to integrate with a SOAP service." WHAT? A couple of pages earlier the author recommends staying away from 'old-fashioned' API styles like SOAP (if possible). On top of that, I wonder why there is so much documentation on developing RESTful web services based on WebAPI - the ASP.NET stack has clearly been moving away from SOAP towards JSON and REST for years. Honestly, as a software developer on the .NET stack this leaves a mixed feeling after all. The remaining chapters I simply consider 'blah blah' without any real value and with lots of theoretical advice.

    Related to chapter 13 on 'Documentation', I just had the 'pleasure' of writing a C#-based client against a Java-based SOAP web service. Personally, I take the WSDL as the master reference in the first place, and Visual Studio generates all the stub types involved in the communication. During implementation and testing I came across a 'java.lang.NullPointerException' in various methods and for various method parameters. The WSDL and the generated types were declared as nullable, so nothing to worry about, right? Well, I logged a support ticket, and guess what the response to that scenario was? "The service definition in the WSDL is wrong, please refer to the documentation in order to use the methods and parameters correctly" - no comment!

    Lorna's title is a quick read, and in some areas she offers good advice on designing and implementing web services and APIs. But roughly 100 pages aren't enough to cover a vast topic like that. After all, nice try, and I'm looking forward to an improved second edition. Honestly, I never thought I would end up writing a poor review. In general it's a good book, but it clearly lacks depth, the PHP code samples are incomplete (closing tags missing), and there are too many assumptions and theoretical statements.

    Read the article

  • PostgreSQL continuous archiving not running archive_command

    - by Whatsit
    I've been trying to set up continuous archiving for a simple test PostgreSQL 9.0 database, as per the documentation. In postgres.conf I've set:

        wal_level = archive
        archive_mode = on
        archive_command = 'touch /home/myusername/backup/testtouch'
        archive_timeout = 30s

    ...and restarted PostgreSQL. The file listed by touch never appears. I can manually run the touch command and it works as expected. If I try to create a backup, it waits forever on the archive_command. In psql:

        postgres=# SELECT pg_start_backup('touchtest');
         pg_start_backup
        -----------------
         0/14000020
        (1 row)

        postgres=# SELECT pg_stop_backup();
        NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archived
        WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
        HINT: Check that your archive_command is executing properly. pg_stop_backup can be cancelled safely,
              but the database backup will not be usable without all the WAL segments.

    What would cause this? How can I troubleshoot it? Additional info: running on CentOS 5.4, PostgreSQL 9.0.2 installed as root.
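
    A quick way to check what the running server actually sees, and to force a WAL segment switch so the archiver has work to do (a hedged diagnostic sketch; the log path is a guess for a CentOS install):

        psql -U postgres -c "SHOW archive_mode;"        # must be 'on'; changing it needs a full restart, not a reload
        psql -U postgres -c "SHOW archive_command;"     # confirm the running server sees your touch command
        psql -U postgres -c "SELECT pg_switch_xlog();"  # force a segment switch so the archiver fires without waiting for traffic
        tail -f /var/lib/pgsql/data/pg_log/*.log        # failed archive_command attempts are reported in the server log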

    Read the article

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives rip the tape out). So...write to 2 tapes I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy? But then the other issue is that tapes cannot usually be read by different backup software vendors. Eg if you go from Arcserve - Backup Exec - Commvault over 10 years you would need to keep all 3 systems so that you could restore old data. Likewise for hardware. Old tapes might not be barcoded. Might not be compatible with the new library etc etc. So do you keep old tape hardware AND old software just in case you might need to restore a 10 year-old file? Or...when you move to a new backup system do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

    Read the article

  • Why should one have a secondary DNS server?

    - by Sam Levin
    I'm very confused. I basically understand how DNS works. Here's an example that helps illustrate what I'm having trouble understanding. Right now, I run a small web server. I use my provider's DNS manager, so I don't have a DNS server hosted on the machine. Let's say for a second that I don't use my host's DNS, and I decide to set up a DNS server on my server. Hypothetical scenario: my (entire) server goes down - DNS included. Why do I need backup DNS? If the server is down, who cares if the DNS server is down too? Even if I had DNS up (i.e. it wasn't on the crashed server), it wouldn't be able to forward requests anyway, since the server would be down. Is the point of having secondary DNS to be able to change the IP addresses that your DNS server points to, so that if your web server was down, you could redirect traffic to a backup? How would you switch to the secondary provider in the event that your main DNS provider becomes unavailable? Is a backup DNS system basically up all the time? How is it configured? Is it just an exact clone of the DNS server you would have on your server? Do they run simultaneously? Hopefully someone can see what I'm hung up on and provide some guidance. Thanks
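
    For a concrete picture of what a secondary server buys you, here is a hedged sketch using dig (example.com and ns2.example.com are placeholders): a zone normally publishes two or more NS records, and resolvers will try any of them, so name resolution keeps working while one of the servers is unreachable.

        dig +short NS example.com            # list the name servers published for the zone
        dig @ns2.example.com example.com A   # any one of them can answer if the others are down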

    Read the article

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution, but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however it's acting pretty strange with a few files (note: company name has been changed in paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL
        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Comapany name data\ but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't open when I ran Robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue. There's no trace that these files were even attempted to be copied, as there are no errors in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and I already have a 2008 model. I'm wondering how to move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade this afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the HDD, this should be the fastest way. This also has the benefit that the 2008 MacBook then contains a brand-new installation and I don't have to remove all my stuff or reinstall Mac OS before I give it away. My question about that second idea: does Snow Leopard handle this correctly? I reboot with the new hardware and everything just works fine? So in a nutshell: what would you do - restore from backup or swap drives? And what about the new software?

    Read the article

  • Solaris 10: How to image a machine?

    - by nonot1
    I've got a Solaris 10 workstation that I'd like to create a full image backup of. The machine has two drives: one UFS for the system root, and one ZFS for data storage. I intend to add a third HD to keep the backup images of both primary drives (including any ZFS snapshots). The purpose is not disaster recovery, but rather to allow me to easily blow away a series of application installation/configuration changes I intend to try. What's the best way to do this? I'm not too familiar with Solaris, but have some basic Linux knowledge. I looked at CloneZilla, but it does not support Solaris. I'm OK with just a dd | gzip > image style solution, but I'd need some way to first zero out the unused blocks on the primary drives to aid gzip. They are much larger than my third drive, but hardly have any real data. Update to clarify: I specifically want to avoid using any file-system snapshot functionality, because part of the app configuration changes involve/depend slightly on existing and new snapshots. Ideally the full collection of snapshots should be part of the backup. Virtualization is not an option, because the goal is to do performance evaluation on a very specific HW configuration. For the same reason, the spurious "back up" snapshots could skew performance data. Thank you
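
    As a rough illustration of the "zero the free space, then dd | gzip" idea (a sketch only - the device names are placeholders, it says nothing about imaging the ZFS pool, and the raw copy is only clean if the filesystem is quiesced, e.g. from single-user mode or alternate boot media):

        # Fill unused UFS blocks with zeros, then remove the filler file
        dd if=/dev/zero of=/zerofile bs=1048576; rm /zerofile
        # Image the raw slice and compress; the zeroed blocks shrink to almost nothing
        dd if=/dev/rdsk/c0t0d0s0 bs=1048576 | gzip -1 > /backupdisk/root_slice.img.gz
        # Restore later with:
        # gzip -dc /backupdisk/root_slice.img.gz | dd of=/dev/rdsk/c0t0d0s0 bs=1048576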

    Read the article

  • Backing up large network (~200 clients) -- Enough Bandwidth?

    - by mtkoan
    My company wants to institute a backup plan for all of the clients on our network, which is about 200 machines. We back up our servers and SQL databases regularly, but it's been our policy not to back up individual machines. What is most critical for people is their Documents and the PST files in Outlook. PST files can be very large, and most people's are around 1-1.5 GB here. So with PST files alone that is 200-300 GB of data needing to be transferred daily to a server for backup. Or compressing first, then transferring - but many of the machines are VERY old and such a task would grind them to a halt. Isn't this the reason networks use things like VMware - to reduce network traffic and streamline backups? Or is this only to reduce hardware costs? Would this much network traffic every day drastically slow down our network? Enough that we'd have to mandate it be done only at night? Or could we stagger them throughout the day? Really appreciate any input, thank you.
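
    For scale, a back-of-the-envelope estimate (assumptions: ~250 GB per day, and roughly 12.5 MB/s of usable throughput on a 100 Mbit/s network or ~125 MB/s on gigabit):

        echo "scale=2; 250*1024/12.5/3600" | bc   # hours of solid transfer at ~12.5 MB/s (about 5.7)
        echo "scale=2; 250*1024/125/3600" | bc    # hours at ~125 MB/s (about 0.6, roughly 35 minutes)

    Which suggests full nightly copies of every PST only fit comfortably into an overnight window on gigabit, or if the copies are incremental rather than full.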

    Read the article

  • Should I keep my ex-employer's data?

    - by Jurily
    Following my brief reign as System Monkey, I am now faced with a dilemma: I did successfully create a backup and a test VM, both on my laptop, as no computer at work had enough free disk space. I haven't deleted the backup yet, as it's still the only one of its kind in the company's history. The original is running on a hard drive in continuous use since 2006. There is now only one person left at the company who knows what a backup is, and they're unlikely to hire someone else, for reasons very closely related to my departure. The last time I tried to talk to them about the importance of backups, they thought I was threatening them. Should I keep it?

    Pros:

    - I get to save people from their own stupidity (the unofficial sysadmin motto, as far as I know)
    - I get to say "I told you so" when they come begging for help, and feel good about it
    - I get to say nice things about myself at my next job interview
    - Nice clean conscience
    - Bonus rep with the appropriate deities

    Cons:

    - Legal problems: even if I do help them out with it, they might just sue me for keeping it anyway, although given the circumstances I think I have a good case
    - Legal problems: given the nature of the job and their security, if something leaks, I'm a likely target for retaliation
    - Legal problems: whatever else I didn't think about
    - I need more space for porn.
    - Legal problems.

    What would you do?

    Read the article

  • Problems with USB-Devices using VDR

    - by emmsinator
    Hey guys, I'm using VDR on vSphere 4. It works successfully; I've already backed up several VMs with VDR and I like it very much. But now we have a problem. We have 2 VMs using a USB device server with a stick plugged in, which is definitely needed by these 2 VMs for licensing and such. Every time I start the backup process, the VMs lose the connection to the USB server and its stick after the snapshot is built, while they are still online. Because of that, the software on these VMs can't work correctly, and I have to restart both machines to solve the problem. That is bad for an automatic backup. Does VDR have a special function for cases like this, or is something like this already known? It would be no problem to shut down the servers for building snapshots on Saturday or Sunday. Can VDR initiate a shutdown before starting the backup process? Otherwise I must try to use scripts, but that wouldn't be so nice. Thanks a lot for your help.

    Read the article

  • VMWare Server modifying files related to paused VMs, is this expected?

    - by David Spillett
    While refreshing the backup of a VM used for testing, I got the following warning from tar:

        tar: /VMsR0/cli_noddyco_test/VM2K8_32_web.vmem: file changed as we read it

    The VMs in question were paused at the time. My first thought was that I'd mixed up the machines and was trying to back up something that was still actively running. To be sure, I unpaused and properly shut down the VM, and the vmem files that tar reported changing vanished as I would expect. Is it normal for VMware Server to touch or alter files for paused VMs like this, or is there likely something amiss with our setup? If this is expected behaviour, is it just touching the vmem file (and so altering the last modification date without actually changing content)? If it is normal for files relating to paused VMs to be updated, I shall have to revise our backup procedures to make sure the VMs are fully shut down rather than just paused (this isn't a problem, but it seems strange and I'd prefer to understand what VMware is doing and why, instead of just dismissing it as "one of those things" and working around it). For further detail: the host in question is VMware Server version 2.0.2 running on 64-bit Debian/Lenny, and that VM did not have any snapshots at the time. We have backed up paused VMs this way in the past with no such warnings from tar.

    Read the article

  • How to (properly) back up a live QEMU/KVM VM?

    - by Roman
    I'm currently engineering a backup solution for KVM VMs as an additional measure to traditional backups. Unfortunately, all currently (August 2013) existing solutions I have come across so far either:

    - do not ensure a consistent backup of the VM (losing RAM state, creating a dirty image, or other things), or
    - require lengthy downtime (complete VM shutdown while backing up).

    I'm aware of QEMU/libvirt's functionality for taking snapshots; however, it's not yet usable since:

    - image-internal snapshots present you with an ever-changing image file, resulting in a likely dirty backup (assuming one uses qcow2 images at all), and
    - one cannot yet merge a currently active external snapshot into the original backing image ("blockcommit").

    For the above reasons, I'm now implementing a script that:

    1. Saves the VM's state and halts it
    2. Sets up devicemapper snapshot(s) where the VM's disk images and state reside
    3. Resumes the VM
    4. Mounts the snapshot(s) of step 2
    5. Backs up the VM's disk and state (plus configuration, for convenience)
    6. Merges back the snapshot(s)

    If I got everything right, this will take consistent backups of VMs with only seconds (if at all, since steps 1-3 are fast, possibly sub-second) of downtime. Of course, when restoring, the VM will be way in the past, but at least it gives me the option of an orderly shutdown/reboot. Am I missing something with this solution? Or has someone indeed already implemented this?
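
    For reference, a hedged outline of those six steps using virsh and an LVM snapshot (a sketch, not the poster's actual script; the domain name, volume names and paths are assumptions):

        virsh save mydomain /backup/mydomain.state            # 1. save RAM state to disk and halt the guest
        lvcreate -s -L 10G -n images_snap /dev/vg0/images     # 2. point-in-time snapshot of the volume holding images and state
        virsh restore /backup/mydomain.state                  # 3. resume the guest from the saved state
        mkdir -p /mnt/snap && mount -o ro /dev/vg0/images_snap /mnt/snap   # 4. mount the snapshot read-only
        rsync -a /mnt/snap/mydomain.qcow2 /backup/            # 5. copy the frozen disk image; config via: virsh dumpxml mydomain
        umount /mnt/snap && lvremove -f /dev/vg0/images_snap  # 6. discard the snapshot; the origin volume kept running underneath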

    Read the article

  • Unnamed, hidden partitions on my 500 GB HD, HP Pavilion dm4 Laptop

    - by emotionull
    I have multiple doubts here. It's a Seagate 500GB 7200RPM HD. I installed it a few months back after my original laptop HD stopped working. The current drives on my laptop are as shown by Windows Disk Management. After installing the new HD, I did a complete clean install of Windows 7 and I didn't create any partition myself, manually. So there are 4 drives. Even previously, before I installed this new HD, my laptop had 4 partitions, but there were no unnamed partitions like the two in this case; the other two were HP Tools and Recovery or something - it was pre-configured, factory-installed Windows. Also, now when I right-click on the unnamed drives in Disk Management, all the options are greyed out (see image) except Delete Partition. So how do I know what's inside those partitions? Will it be OK if I delete them? I want to install Ubuntu and dual-boot it with my current Windows installation. I cannot do that in the current setup, as there are already 4 partitions on my HD, and if I try to make a new partition it will be a logical one (correct me if I am wrong here). So can I delete the unnamed, hidden partitions and use them for Ubuntu? A somewhat unrelated question: as a backup option, can I use Windows 7's Backup and Restore facility to keep a complete backup of all the drivers and system software?
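
    If it helps, once you boot the Ubuntu live USB you are already planning to use, two standard commands will show what those unnamed partitions actually contain before you delete anything (a sketch; /dev/sda is assumed to be the internal disk):

        sudo fdisk -l /dev/sda   # partition table: types and sizes of all four primary partitions
        sudo blkid               # filesystem labels such as HP_TOOLS or RECOVERY, which usually identify them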

    Read the article

  • Why is piping dd through gzip so much faster than a direct copy?

    - by Foo Bar
    I wanted to back up a path from a computer in my network to another computer in the same network over a 100 MBit/s line. For this I did

        dd if=/local/path of=/remote/path/in/local/network/backup.img

    which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to try gzipping it on the fly to make it much smaller, so that the amount to transfer is less. So I did

        dd if=/local/folder | gzip > /remote/path/in/local/network/backup.img.gz

    But now I get something like 1 MB/s network transfer speed, so a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files, and it was always the same. Why does piping dd through gzip increase the transfer rate by a large factor, instead of only reducing the byte length of the stream by a large factor? I'd have expected even a small decrease in transfer rate instead, due to the higher CPU consumption while compressing, but now I get a double plus. Not that I'm not happy, but I'm just wondering. ;)
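
    One variable worth isolating before blaming the compression is dd's default 512-byte block size; a hedged comparison, reusing the poster's placeholder paths:

        # Same plain copy, but with an explicit large block size - if this is fast too,
        # the block size rather than the compression explains most of the difference
        dd if=/local/path of=/remote/path/in/local/network/test_bs.img bs=1M
        # The original pipeline, also with a large read block, for comparison
        dd if=/local/path bs=1M | gzip > /remote/path/in/local/network/test_gzip.img.gz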

    Read the article

  • SSAS Multithreaded sync with Windows 2008 R2

    - by ACALVETT
    We have been happily running some of our systems on Windows 2003 and have had an upgrade to W2K8 R2 on the list for quite some time. The upgrade has now completed and we can start taking advantage of some of the new features, which is the reason for this post. For a long time we have used the sample Robocopy script from the SQLCat team to synchronize some of our larger SSAS databases. If you're wondering what I mean by large: around 5 TB with a good few thousand partitions. The script works like a dream...(read more)

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included: Web Sites: General Availability Release of Windows Azure Web Sites with SLA Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines) MSDN: No more credit card requirement for sign-up All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Web Sites: General Availability Release of Windows Azure Web Sites I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing: A 99.9% monthly SLA (Service Level Agreement) for the Standard tier Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support) The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM: Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of a SSL cert – but the fee is per-cert and not per site which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features: Auto-Scale support Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details. 64-bit and 32-bit mode support You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  
This enables you to address even more memory within individual web applications.

Memory dumps

Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools.

Scaling Sites Independently

Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary. Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here. Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).

Mobile Services: General Availability Release of Windows Azure Mobile Services

I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services. Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.

Customers

We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it. These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require. In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be.

Partners

When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services. The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps:

- New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services.
- SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox.
- Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps.
- Xamarin provides a Mobile Services add-on that makes it easy to build cross-platform connected mobile apps.
- Pusher lets you quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps.

Visual Studio 2013 and Windows 8.1

This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever. Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right-clicking and then choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically.

Add Push Notifications

Push Notifications and Live Tiles are a key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking the Add a Push Notification item: The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app.

Server Explorer Integration

In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, edit, and save server-side scripts without ever leaving Visual Studio, as shown on the image below:

Pricing

With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium. Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs. You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. You can find the full details of the new pricing model here.

Build Conference Talks

The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks:

- Mobile Services – Soup to Nuts — Josh Twist
- Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner
- Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev
- Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris
- Who’s that user? Identity in Mobile Apps — Dinesh Kulkarni
- Building REST Services with JavaScript — Nathan Totten
- Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum
- Protips for Windows Azure Mobile Services — Chris Risner

AutoScale: Dynamically scale up/down your app based on real-world usage

One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we’re announcing that AutoScale will be built into Windows Azure directly. With today’s release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon).
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics: CPU percentage Storage queue depth (Cloud Services and Virtual Machines only) We’ll enable automatic scaling on even more scale metrics in future updates. When to use Auto-Scale The following are good criteria for services/apps that will benefit from the use of auto-scale: The service/app can scale horizontally (e.g. it can be duplicated to multiple instances) The service/app load changes over time If your app meets these criteria, then you should look to leverage auto-scale. How to Enable Auto-Scale To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the scale tab turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale.  Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. The image below demonstrates how to enable Auto-Scale on a Windows Azure Web-Site.  I’ve configured the web-site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40% then Windows Azure will automatically start shutting down VMs to save me money: Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances. Using the Auto-Scale Preview With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running  in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web sites, Cloud services or Virtual Machines). If you hit the 10 limit, you can disable auto-scale for any resource to enable it for another. Alerts and Notifications Starting today we are now providing the ability to configure threshold based alerts on monitoring metrics. This feature is available for compute services (cloud services, VM, websites and mobiles services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for: Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. 
For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured. Creating Alert Rules You can add an alert rule for a monitoring metric by navigating to the Setting -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on: The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected:   Once created the rule will show up in your alerts list within the settings tab: The rule above is defined as “not activated” since it hasn’t tripped over the CPU threshold we set.  If the CPU on the above machine goes over the limit, though, I’ll get an email notifying me from an Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I’ll see it highlighted in red.  Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past. Alert Notifications With today’s initial preview you can now easily create alerting rules based on monitoring metrics and get notified on active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules. No More Credit Card Requirement for MSDN Subscribers Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and remove the spending limit. This enables a super easy, one page sign-up experience for MSDN users.  Simply sign-up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test:   This makes it trivially easy for every MDSN customer to start using Windows Azure today.  If you haven’t signed up yet, I definitely recommend checking it out. Summary Today’s release includes a ton of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • How do I repair/uninstall Visual Web Developer 2010 Beta 2?

    - by Sprinter
    I cannot uninstall Visual Web Developer 2010 Beta 2. When I was trying to uninstall the first time, the power went off to my machine and screwed up the Beta 2 installation. I cannot find a Visual Web Developer 2010 Beta 2 download on the Microsoft website any longer to repair the Beta 2 installation. How can I get VWD 2010 Beta 2 off of my system so I can install the new release?

    Read the article

  • How can I keep a folder synchronized to an external USB hard drive in Ubuntu?

    - by Cesar
    I have a growing music collection which I manually keep in sync with an external USB drive. Sometimes I edit the ID3 tags, or add or delete a file on either the hard drive or the USB drive, and I would like to keep those changes synchronized between both. Does Ubuntu have something available that would help me with this scenario? Preferably something easy to use, with a UI. Update: to clarify my question, changes may happen on both the local hard drive and the USB drive, so the sync process must work in both directions.
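
    If a command-line tool is acceptable, rsync covers the one-way mirror case and unison (available in the Ubuntu repositories) handles true two-way synchronization; a minimal sketch, assuming the USB drive mounts at /media/music-backup:

        # One-way mirror (hard drive -> USB), deleting files on the USB side that were removed locally
        rsync -a --delete ~/Music/ /media/music-backup/Music/
        # Two-way sync that propagates changes in either direction and prompts on conflicts
        unison ~/Music /media/music-backup/Music

    For a graphical front-end, Grsync (for rsync) and the GTK interface of Unison are the ones commonly suggested.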

    Read the article

  • HTML5 or Javascript game engine to develop a browser game

    - by Jack Duluoz
    I would like to start developing an MMO browser game, like Travian or Ogame, probably also involving somewhat more sophisticated graphical features such as players interacting in real time with a 2D map or something like that. My main doubt is what kind of development tools I should use: I have good experience with PHP and MySQL on the server side, and with JavaScript (and jQuery) on the client side. Coding everything from scratch would of course be really painful, so I was wondering if I should use a JavaScript game engine or not. Are there (possibly free) game engines you would recommend? Are they good enough to develop a big game? Also, I have seen a lot of HTML5 games popping up lately, but I'm not sure whether using HTML5 is a good idea or not. Would you recommend it? What are the pros and cons of using HTML5? If you'd recommend it, do you have any good links regarding game development with HTML5? (PS: I know that HTML5 and a JavaScript engine are not mutually exclusive; I just didn't know how to formulate a proper title, since English is not my main language. So please answer addressing HTML5 and a game engine pros and cons separately.)

    Read the article

  • Organizing an entity system with external component managers?

    - by Gustav
    I'm designing a game engine for a top-down multiplayer 2D shooter game, which I want to be reasonably reusable for other top-down shooter games. At the moment I'm thinking about how something like an entity system in it should be designed. First I thought about this: I have a class called EntityManager. It should implement a method called Update and another one called Draw. The reason for separating logic and rendering is that I can then omit the Draw method when running a standalone server. EntityManager owns a list of objects of type BaseEntity. Each entity owns a list of components such as EntityModel (the drawable representation of an entity), EntityNetworkInterface, and EntityPhysicalBody. EntityManager also owns a list of component managers like EntityRenderManager, EntityNetworkManager and EntityPhysicsManager. Each component manager keeps references to the entity components. There are various reasons for moving this code out of the entity's own class and doing it collectively instead. For example, I'm using an external physics library, Box2D, for the game. In Box2D, you first add the bodies and shapes to a world (owned by the EntityPhysicsManager in this case) and add collision callbacks (which would be dispatched to the entity object itself in my system). Then you run a function which simulates everything in the system. I find it hard to find a better solution than doing it in an external component manager like this. Entity creation is done like this: EntityManager implements the method RegisterEntity(entityClass, factory), which registers how to create an entity of that class. It also implements the method CreateEntity(entityClass), which returns an object of type BaseEntity. And now comes my problem: how would the reference to a component be registered with the component managers? I have no idea how I would reference the component managers from a factory/closure.

    Read the article

  • Speed, delta time and movement

    - by munchor
        player.vx = scroll_speed * dt

        /* Update positions */
        player.x += player.vx
        player.y += player.vy

    I have a delta time in milliseconds, and I was wondering how I can use it properly. I tried the above, but that makes the player go fast when the computer is fast and slow when the computer is slow. The same thing happens with jumping: the player can jump really high when the computer is faster. This is sort of unfair, I think. Should I be doing this some other way? Thanks.

    Read the article

  • How can I store spell & items using a std::vector implementation?

    - by Vladimir Marenus
    I'm following along with a book from GameInstitute right now, and it's asking me to: Allow the player to buy and carry healing potions and potions of fireball. You can add an Item array (after you define the item class) to the Player class for storing them, or use a std::vector to store them. I think I would like to use the std::vector implementation, because that seems to confuse me less than making an item class, but I am unsure how to do so. I've heard from many people that vectors are great ways to store dynamic values (such as items, weapons, etc), but I've not seen it used.

    Read the article

  • Bit-Twiddling in SQL

    - by Mike C
    Someone posted a question to the SQL Server forum the other day asking how to count runs of zero bits in an integer using SQL. Basically the poster wanted to know how to efficiently determine the longest contiguous string of zero-bits (known as a run of bits) in any given 32-bit integer. Here are a couple of examples to demonstrate the idea:

        Decimal = Binary = Zero Run
        999,999,999 decimal = 00111011100110101100100111111111 binary = 2 contiguous zero bits
        666,666,666 decimal = 00100111 10111100...(read more)

    Read the article

  • Oracle ATG Web Commerce 10 Implementation Developer Boot Camp - Reading (UK) - October 1-12, 2012

    - by Richard Lefebvre
    REGISTER NOW: Oracle ATG Web Commerce 10 Implementation Developer Boot Camp Reading, UK, October 1-12, 2012! OPN invites you to join us for a 10-day implementation bootcamp on Oracle ATG Web Commerce in Reading, UK from October 1-12, 2012.This 10-day boot camp is designed to provide partners with hands-on experience and technical training to successfully build and deploy Oracle ATG Web Commerce 10 Applications. This particular boot camp is focused on helping partners develop the essential skills needed to implement every aspect of an ATG Commerce Application from scratch, (not CRS-based), with a specific goal of enabling experienced Java/J2EE developers with a path towards becoming functional, effective, and contributing members of an ATG implementation team. Built for both new and experienced ATG developers alike, the collaborative nature of this program and its exercises, have proven to be highly effective and extremely valuable in learning the best practices for implementing ATG solutions. Though not required, this bootcamp provides a structured path to earning a Certified Oracle ATG Web Commerce 10 Specialization! What Is Covered: This boot camp is for Application Developers and Software Architects wanting to gain valuable insight into ATG application development best practices, as well as relevant and applicable implementation experience on projects modeled after four of the most common types of applications built on the ATG platform. The following learning objectives are all critical, and are of equal priority in enabling this role to succeed. This learning boot camp will help with: Building a basic functional transaction-ready ATG Web Commerce 10 Application. Utilizing ATG’s platform features such as scenarios, slots, targeters, user profiles and segments, to create a personalized user experience. Building Nucleus components to support and/or extend application functionality. Understanding the intricacies of ATG order checkout and fulfillment. Specifying, designing and implementing new commerce features in ATG 10. Building a functional commerce application modeled after four of the most common types of applications built on the ATG platform, within an agile-based project team environment and under simulated real-world project conditions. Duration: The Oracle ATG Web Commerce 10 Implementation Developer Boot Camp is an instructor-led workshop spanning 10 days. Audience: Application Developers Software Architects Prerequisite Training and Environment Requirements: Programming and Markup Experience with Java J2EE, JavaScript, XML, HTML and CSS Completion of Oracle ATG Web Commerce 10 Implementation Specialist Development Guided Learning Path modules Participants will be required to bring their own laptop that meets the minimum specifications:   64-bit PC and OS (e.g. Windows 7 64-bit) 4GB RAM or more 40GB Hard Disk Space Laptops will require access to the Internet through Remote Desktop via Windows. Agenda Topics: Week 1 – Day 1 through 5 Build a Basic Commerce Application In week one of the boot camp training, we will apply knowledge learned from the ATG Web Commerce 10 Implementation Developer Guided Learning Path modules, towards building a basic transaction-ready commerce application. There will be little to no lectures delivered in this boot camp, as developers will be fully engaged in ATG Application Development activities and best practices. 
Developers will work independently on the following lab assignments from days 1 through 5:

Lab Assignments
1. Environment Setup
2. Build a dynamic Home Page
3. Site Authentication
4. Build Customer Registration
5. Display Top Level Categories
6. Display Product Sub-Categories
7. Display Product List Page
8. Display Product Detail Page
9. ATG Inventory
10. Build “Add to Cart” Functionality
11. Build Shopping Cart
12. Build Checkout Page
13. Build Checkout Review Page
14. Create an Order and Build Order Confirmation Page
15. Implement Slots and Targeters for Personalization
16. Implement Pricing and Promotions
17. Order Fulfillment

Week 2 – Day 6 through 10: Team-based Case Project
In the second week of the boot camp training, participants will be asked to join a project team that will select a case project for the team to implement. Teams will be able to choose from four of the most common application types developed and deployed on the ATG platform. They are as follows: hard goods with physical fulfillment, soft goods with electronic fulfillment, a service or subscription case example, or a course/event registration case example. Team projects will have approximately 160 hours of use cases/stories for each team to build (40 hours per developer). Each day's use cases/stories will build upon the prior day's work, and therefore must be fully completed at the end of each day. Please note that this boot camp intends to simulate real-world project conditions, and as such project teams will likely need to work beyond normal business hours at times. To promote further collaboration and group learning, each team will be asked to present their work and share the methodologies and solutions they've applied to their cases at the end of each day.

Location: Oracle Reading CVC, TPC510 Room: Wraysbury, Reading, UK, 9:00 AM – 5:00 PM
Registration Fee (10 Days): US $3,375
Please click on the following link to REGISTER or visit the Oracle ATG Web Commerce 10 Implementation Developer Boot Camp page for more information.
Questions: Patrick Ty, Partner Enablement, Oracle Commerce
Phone: 310.343.7687 Mobile: 310.633.1013 Email: [email protected]

    Read the article
