Search Results

Search found 29638 results on 1186 pages for 'phone number'.


  • CSS vendor prefixes considered harmful

    I recently came across a post about border-radius by the IE team, which said IE9 supports border-radius (cool!) without a vendor prefix (even cooler!). The post continues: While a number of web pages already make use of this feature, some [...] do not render properly in IE9 or Opera 10.50 because they lack an unprefixed declaration of the border-radius property. As the specification nears Recommendation and browser vendors are working on their final implementations and test cases for submission to the W3C, …

    Read the article

  • ntfsresize volume and size information

    - by antonio
    I am going to resize my sda2 NTFS partition. When gathering info with ntfsresize, I get: ntfsresize --info /dev/sda2 ntfsresize v2013.1.13 (libntfs-3g) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 21999993344 bytes (22000 MB) Current device size: 23622320128 bytes (23623 MB) Checking filesystem consistency ... Accounting clusters ... Space in use : 10673 MB (48.5%) Collecting resizing constraints ... You might resize at 10672590848 bytes or 10673 MB (freeing 11327 MB). Please make a test run using both the -n and -s options before real resizing! Can you tell me what is the difference between volume and device size? As for device size, 23622320128 bytes / 1000^2 = 23622.3 MB. Why is 23623 MB reported instead of 23622? Note that parted confirms this value: parted /dev/sda2 unit MB p Model: Unknown (unknown) Disk /dev/sda2: 23622MB Sector size (logical/physical): 512B/512B Partition Table: loop Disk Flags: Number Start End Size File system Flags 1 0.00MB 23622MB 23622MB ntfs
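
    A plausible reading of those numbers (an assumption based on the output above, not on the ntfsresize source): the volume size is the size of the NTFS filesystem itself, while the device size is the size of the underlying partition that holds it, and ntfsresize appears to round byte counts up to the next whole MB when printing. A quick Python sketch reproduces both figures; parted's 23622MB is consistent with rounding to the nearest MB instead of up.

        import math

        def ntfsresize_mb(nbytes):
            # assumed display behaviour: divide by 10**6 and round up to the next whole MB
            return math.ceil(nbytes / 10**6)

        print(ntfsresize_mb(23622320128))  # 23623 -> the reported device size
        print(ntfsresize_mb(21999993344))  # 22000 -> the reported volume size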

    Read the article

  • Languages/Methods to Learn for Scientific Computing?

    - by Zéychin
    I'm a second-semester Junior working towards a Computer Science degree with a Scientific Computing concentration and a Mathematics degree with a concentration on Applied Discrete Mathematics. So, number crunching and such rather than a bunch of regular expressions, interface design, and networking. I've found that I'm not learning new relevant languages from my coursework and am interested in what the community would recommend me to learn. I know as far as programming methods go, I need to learn more about parallelizing programs, but if there's anything else you can recommend, I would appreciate it. Here's a list of the languages with which I am very experienced (web technologies omitted as they barely apply here). Any recommendations for additional languages I should learn would be very much appreciated!: Java C C++ Fortran77/90/95 Haskell Python MATLAB

    Read the article

  • Routing PHP memcached calls to Oracle Coherence

    - by cj
    A new post, Getting Started with the Coherence Memcached Adaptor, from David Felcey shows how PHP memcached calls can automatically be routed to store data in Oracle Coherence 12c. This is possible now that Coherence 12.1.3 supports memcached clients using the binary memcached protocol. David's post shows how the Coherence Memcached adaptor can be configured as a proxy service that runs in the Coherence cluster. There's nothing in particular to configure in the PHP application, except to enable memcached.use_sasl = 1. So what is Coherence? It is an "in-memory data grid solution" with a number of advanced features. You can read more in the Oracle Coherence 12c Data Sheet.

    Read the article

  • Board Game Design in Cocos2d

    - by object2.0
    Hi folks, I am going to start a chess-like board game, and for that I have reviewed a number of things available. One is http://www.mapeditor.org/, which you can use to create grid-based games. Another option is GeekGameBoard for iPhone, available at http://mooseyard.lighthouseapp.com/projects/23201-geekgameboard. Now I want your expert opinion: would it be better to make the game in cocos2d using the first option or the second option? Both look promising to me and give good control over board design. PS: Sorry for the duplicate; I found out about http://gamedev.stackexchange.com/ only after posting this on Stack Exchange, so I am posting it here again as I feel it is the more relevant board.

    Read the article

  • Is it possible to add registry entries to the wine registry and make illustrator work?

    - by Prasad
    I haven't done this kind of work before, but I really need Adobe Illustrator to work on Ubuntu! I don't care if it is CS3 or CS4. I have installed the CS3 and CS4 Master Collection on Windows, but with Wine on Ubuntu I can't run it (yes, no registry entries were added to the Wine registry!). I can copy all the needed files, hidden files included, to the /home/prasad/.wine/dosdevices/C: directory, but how do I add the registry entries for them? (Is there something like the Windows Registry Editor for Wine?) Is it possible to make Illustrator run on Ubuntu like that? I tried to install the Master Collection but it failed a number of times. I use Ubuntu 10.10.

    Read the article

  • Changes to File Store Provider in UCM PS3

    - by Kevin Smith
    In the recent PS3 release of UCM (11.1.1.4.0) there are some significant changes to the File Store Provider (FSP) configuration. For new PS3 installs (not upgrades from PS2), the FSP default storage rule includes a dispersion rule that changes the web-layout and vault paths by adding dispersion directories to the paths, to limit the number of files in the vault and web-layout directories. What that means is that if you install a new PS3 UCM instance and migrate content in from a previous version of UCM, the web URL will change. That is a critical problem for web sites and for general document management. See below for some details on the FSP configuration in PS3 and how you can change the default behavior. Use the link below to read the rest of this post, where I describe the issue in detail and provide instructions for how to modify a PS3 instance to use the old format for the web-layout path.

    Read the article

  • When using membership provider, do you use the user ID or the username?

    - by Chris
    I've come across this is in a couple of different applications that I've worked on. They all used the ASP.NET Membership Provider for user accounts and controlling access to certain areas, but when we've gotten down into the code I've noticed that in one we're passing around the string based user name, like "Ralph Waters", or we're passing around the Guid based user ID from the membership table. Now both seem to work. You can make methods which get by username, or get by user ID, but both have felt somewhat "funny". When you pass a string like "Ralph Waters" you're passing essentially two separate words that make up a unique identifier. And with a Guid, you're passing around a string/number combination which can be cast and made unique. So my question is this; when using Membership Provider, which do you use, the username or the user ID to get back to the user? Thanks all!
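
    One way to see why many people lean towards the immutable user ID: references keyed by a display name break as soon as the name changes, while references keyed by the GUID survive. This is only a toy illustration (Python standing in for language-neutral pseudocode, with made-up names), not ASP.NET Membership code.

        import uuid

        users = {}                                       # user_id -> profile record

        def create_user(name):
            user_id = uuid.uuid4()                       # surrogate key; never changes
            users[user_id] = {"name": name}
            return user_id

        ralph = create_user("Ralph Waters")
        orders_by_id = {ralph: ["order-1"]}              # reference via the stable ID
        orders_by_name = {"Ralph Waters": ["order-1"]}   # reference via the display name

        users[ralph]["name"] = "Ralph Smith"             # the user gets renamed
        print(orders_by_id[ralph])                       # still resolves
        print(orders_by_name.get(users[ralph]["name"]))  # None: the name-keyed link is broken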

    Read the article

  • HTML5 <VIDEO/> + IE9

    Yesterday at MIX, Dean (general manager of the IE team) announced the availability of the first IE9 Platform Preview for developers. Dean also committed to updating the preview approximately every eight weeks. There is a good article on Beta News covering some of the technical details of the release. A key part of the announcements was the support for hardware-accelerated HTML5, including support for the video tag with the H.264 codec. What I’m going to write next is based on a number of years of observations…

    Read the article

  • Size limit while using UICollectionView as tiled map for iOS game?

    - by Alexander Winn
    I'm working on a turn-based strategy game for iOS (picture Civilization 2 as a template example), and I'm considering using a UICollectionView as my game map. Each cell would be a tile, and I could use the "didSelectCell" method to handle player interaction with each tile. Here's my question: I know that UICollectionViewCells are dequeued and reused by the OS, so does that mean the view could support an effectively infinitely large map, so long as only a few cells are onscreen at a time? However many cells were onscreen would be held in memory, and obviously the data source would take up some memory, but would my offscreen map be limited to a certain size, or could it be enormous so long as the number of cells visible at any one time wasn't too much for the device to handle? Basically, is there any memory weight to offscreen cells, or do only visible cells have any impact? Also, does a UICollectionView seem like a bad idea for a game map, in a way I haven't thought of yet? It seems like it would work well, but I haven't tried it yet, so any thoughts are welcome.
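
    For what it's worth, the reuse model means cell memory scales with what is on screen, not with the map: offscreen tiles exist only as entries in the data source, not as views. A rough sketch of that reuse idea (plain Python as pseudocode, not UIKit):

        class TileMapView:
            """Toy model of cell reuse: live cell objects are bounded by what fits on screen."""
            def __init__(self):
                self.visible = {}       # (row, col) -> cell currently on screen
                self.reuse_pool = []    # cells that scrolled off screen, ready for reuse

            def _dequeue(self):
                return self.reuse_pool.pop() if self.reuse_pool else {"new_allocation": True}

            def scroll_to(self, positions):
                self.reuse_pool.extend(self.visible.values())  # recycle whatever left the screen
                self.visible = {pos: self._dequeue() for pos in positions}

        view = TileMapView()
        for offset in range(1000):  # scroll across a huge map
            view.scroll_to([(r + offset, c) for r in range(8) for c in range(8)])
        print(len(view.visible) + len(view.reuse_pool))  # stays at 64, however large the map is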

    Read the article

  • Replace Broadcom "wl" driver with "b43"

    - by Laszlo Boros
    I'm using Ubuntu 10.04.4 LTS, and in my laptop there is a Broadcom BCM4312 wlan card. lspci output: 04:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g (rev 01) Subsystem: Broadcom Corporation Device 04b5 Flags: bus master, fast devsel, latency 0, IRQ 18 Memory at f4500000 (64-bit, non-prefetchable) [size=16K] Capabilities: [40] Power Management version 3 Capabilities: [58] Vendor Specific Information Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable- Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [13c] Virtual Channel Capabilities: [160] Device Serial Number 81-ac-1d-ff-ff-12-54-92 Capabilities: [16c] Power Budgeting Kernel driver in use: wl Kernel modules: wl, ssb So as you can see, the current (and default) driver is wl - installed with jockey. But I have another Ubuntu based distribution on my laptop (BackTrack linux), which is also 10.04, but it has the b43 driver installed and the overall performance is MUCH better. So I would like to install it on this OS too, but even google didn't help me. So my question is how to install the latest b43 driver on my Ubuntu?

    Read the article

  • Loads wrong resolution when installing

    - by Kevin DB
    I'm trying to install Ubuntu 11.04 from a USB. My computer does load the USB and I get some sort of BIOS-like screen where I can choose between 'Install from USB' and 'Boot up from USB'. When I enter the corresponding number, my screen resolution is totally messed up. I can see that Ubuntu is starting up when I use 'Boot from USB', but the screen looks the way it would if the resolution were set too high. Same story with 'Install from USB': I can see the screens, menus, and so on loading, but not clearly, because everything loads at too high a resolution. I'm trying to dual-boot with Windows XP, and the maximum screen resolution is 800x600.

    Read the article

  • Mobile app technology choice - popularity trend data?

    - by Ryan Weir
    I'm familiar with the arguments for HTML5 apps over native, but was looking for some numbers or data to indicate a trend of how popular they are relative to each other for mobile app development. E.g. Surveys among programmers, data collected from the various app stores, number of downloads of development tools for those platforms. Your source could consider new apps, existing apps, categorized by downloads, app downloads weighted by popularity - basically any source you've got I would like to see. In my own personal monkey-sphere of developers, HTML5 seems to be starting to dominate as of about 6 months ago over iOS and Android by a wide margin as the technology stack preference - so I was wondering if this reflects a trend that's been measured globally and if there was objective data to support it.

    Read the article

  • Google I/O 2012 - What's New in the Google Drive SDK

    Google I/O 2012 - What's New in the Google Drive SDK Josh Hudgins, John Day-Richter In this talk, we will introduce a number of major new features and platforms to the Google Drive SDK. We will discuss what we feel is a revolution in the way developers write collaborative applications. We will also announce a new API to make managing files in Google Drive even easier for developers, replacing some legacy APIs in the process. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers Views: 556 | 6 ratings | Time: 55:14

    Read the article

  • How Estimates Became Quotes

    - by Lee Brandt
    It’s our fault. Well, not completely, but we haven’t helped the situation any. All of what follows comes from my own experiences which, from talking to lots of other developers about it, seems to be pretty much par for the course.

    Where We Started
    When we first started estimating, we estimated pretty clearly. We would try to imagine something we’d done that was similar to the project being estimated and we’d toss it about in our heads a bit and see how much bigger or smaller we thought this new thing was, and add or subtract accordingly. We wouldn’t spend too much time on it, because we wanted to get to writing the software. Eventually, we’d come across some huge problem that there was no way we could’ve known about ahead of time. Either we didn’t see this thing or we didn’t realize that this particular version of a problem would be so… problematic. We usually call this “not knowing what we don’t know”. It’s unavoidable. We just can’t know. Until we wade in and start putting some code together, there are just some things we won’t know… and some things we don’t even know that we don’t know. Y’know? So what happens? We go over budget. Project managers scream and dance the dance of the stressed-out project manager, and there is nothing we can do (or could’ve done) about it. We didn’t know. We thought about it for a bit and we didn’t see this herculean task sitting in the middle of our nice quiet project, and it has bitten us in the rear end. We now know how to handle this in the future, though. We will take some more time to pick around the requirements and discover all those things we don’t know. We’ll do some prototyping, we’ll read some blogs about similar projects, we’ll really grill the customer with questions during the requirements gathering phase. We’ll keep asking “what else?” until they shove us down the stairs. We’ll take our time and uncover it all.

    We Learned, But Good
    The next time comes, and you know what happens? We do it. We grill the customer for weeks and prototype and read and research and we estimate everything down to the last button on the last form. Know what that gets us? It gets us three months of wasted time, and our estimate will still be off. Possibly off by a factor of four. WTF, mate? No way we could be surprised by something! We uncovered every particle. We turned every stone. How is it we still came across unknowns? Because we STILL didn’t know what we didn’t know. How could we? We didn’t know to ask. The worst part is, we’ve now convinced the product owner that this is NOT an estimate. It is a solid number based on massive research and an endless number of questions that they answered. There is absolutely no way you don’t know everything there is to know about this project now. No way there is anything you haven’t uncovered. And their faith in that “Esti-Quote” goes through the roof. When the project goes over this time, they might even begin to question whether or not you know what you’re doing. Who could blame them? You drilled them for weeks about every little thing, and when they complained about all the questions, you told them you wanted to uncover everything so there would be no surprises. So we set them up to fail.

    Guess, Then Plan
    We had a chance. At the beginning we could have just said, “That’s just a gut-feeling estimate, based on my past experience with similar projects. There could still be surprises.” If we spend SOME time doing SOME discovery and then bounce that against our own past experiences, we can come up with a fairly healthy estimate. We can then help the product owner understand that an estimate is a guess. Sure, it’s an educated guess, but it is still a guess. If we get it right it will be almost completely luck. Then, we help them to plan the development by taking that guess (yes, they still need the guess for planning purposes) and start measuring early and often to see if we still think we are right. We should adjust the estimate and alert the product owner as soon as we see problems (bad news does not age well), and we should be able to see any problems immediately if we are constantly measuring our pace. In lean software, we start with that guess and begin measuring cycle times immediately. Then we can make projections based on those cycle times and compare them to our estimate. This constant feedback is the best way to ensure that there are no surprises at the END of the project. There will still be surprises, but we’ll see them sooner and have a better understanding of how they will affect our overall timeline. What do you think?
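
    The cycle-time projection mentioned above can be almost embarrassingly simple. A hypothetical sketch (the numbers, and the assumption that each person works one item at a time at a roughly stable pace, are made up for illustration):

        from datetime import date, timedelta
        from statistics import mean

        def projected_finish(cycle_times_days, items_remaining, people, today=None):
            """Project a finish date from measured cycle times."""
            today = today or date.today()
            days_left = mean(cycle_times_days) * items_remaining / people
            return today + timedelta(days=days_left)

        # five finished items took 3, 5, 4, 6 and 4 days; 20 items remain; 2 developers
        print(projected_finish([3, 5, 4, 6, 4], items_remaining=20, people=2))

    Re-running that projection after every finished item is the "measure early and often" loop: the estimate stops being a one-off quote and becomes a forecast that improves as real data arrives.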

    Read the article

  • Computer Networks UNISA - Chap 14 – Insuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to: identify the characteristics of a network that keep data safe from loss or damage; protect an enterprise-wide network from viruses; explain network- and system-level fault tolerance techniques; discuss issues related to network backup and recovery strategies; and describe the components of a useful disaster recovery plan and the options for disaster contingencies.

    What are integrity and availability?
    Integrity – the soundness of a network's programs, data, services, devices, and connections. Availability – how consistently and reliably a file or system can be accessed by authorized personnel. A number of phenomena can compromise both integrity and availability, including security breaches, natural disasters, malicious intruders, power flaws, human error, users, etc. Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. The following are some guidelines: allow only network administrators to create or modify NOS and application system users; monitor the network for unauthorized access or changes; record authorized system changes in a change management system; install redundant components; perform regular health checks on the network; check system performance, error logs, and the system log book regularly; keep backups; and implement and enforce security and disaster recovery policies. These are just some of the basics…

    Malware
    Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of malware include boot sector viruses, macro viruses, file infector viruses, worms, Trojan horses, network viruses, and bots. Some common characteristics of malware include encryption, stealth, polymorphism, and time dependence.

    Malware Protection
    There are various tools available to protect you from malware, called anti-malware software. These monitor your system for indications that a program is performing potential malware operations. A number of techniques are used to detect malware, including signature scanning, integrity checking, and monitoring for unexpected file changes or virus-like behaviours. It is important to decide where anti-malware tools will be installed and to find a balance between performance and protection. There are several general-purpose malware policies that can be implemented to protect your network: every computer in an organization should be equipped with malware detection and cleaning software that runs regularly; users should not be allowed to alter or disable the anti-malware software; users should know what to do in case the anti-malware program detects a malware virus; users should be prohibited from installing any unauthorized software on their systems; and system-wide alerts should be issued to network users notifying them if a serious malware virus has been detected.

    Fault Tolerance
    Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance. Fault tolerance is the ability of a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees; the optimal level of fault tolerance for a system depends on how critical its services and files are to productivity. Generally, the more fault tolerant the system, the more expensive it is. The areas that need to be considered for fault tolerance include the environment (temperature and humidity), power, topology and connectivity, servers, and storage.

    Power
    Typical power flaws include: surges – a brief increase in voltage due to lightning strikes, solar flares or some idiot at City Power; noise – fluctuation in voltage levels caused by other devices on the network or electromagnetic interference; brownout – a sag in voltage for just a moment; and blackout – a complete power loss. There are various alternate power sources to consider, including UPSs and generators. UPSs fall into two categories: standby UPS – provides continuous power when the mains goes down (with a brief period of switching over); and online UPS – is online all the time, and the device receives power from the UPS all the time (the UPS is charged continuously).

    Servers
    There are various techniques for fault tolerance with servers. Server mirroring is an option where one device or component duplicates the activities of another; it is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities, and if one unit in the cluster goes down, another unit can be brought in to replace it.

    Storage
    There are various techniques available, including RAID arrays, NAS (Network Attached Storage), and SANs (Storage Area Networks).

    Data Backup
    A backup is a copy of data or program files created for archiving or safekeeping. Many different options for backups exist with various media, including optical media, tape backup, external disk drives, and network backups. These vary in cost and speed.

    Backup Strategy
    After selecting the appropriate tool for performing your server backups, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include: what data must be backed up; at what time of day or night will the backups occur; how will you verify the accuracy of the backups; where and for how long will backup media be stored; who will take responsibility for ensuring that backups occurred; how long will you save backups; and where will backup and recovery documentation be stored. Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up (illustrated in the sketch after this summary): full backup – all data on all servers is copied to storage media; incremental backup – only data that has changed since the last full or incremental backup is copied to a storage medium; differential backup – only data that has changed since the last full backup is copied to a storage medium.

    Disaster Recovery
    Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (e.g. fire, line fault, etc.). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, with some appropriately configured devices. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, and all are appropriately configured.
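
    The difference between the full, incremental, and differential methods described above is easy to blur; a small illustrative selection rule (hypothetical logic, not a real backup tool) makes it concrete:

        def files_to_copy(files, last_full, last_backup, method):
            """files maps path -> last-modified time; timestamps are plain numbers here."""
            if method == "full":
                return set(files)                                        # everything, every time
            if method == "incremental":
                return {f for f, m in files.items() if m > last_backup}  # changed since the last backup of any kind
            if method == "differential":
                return {f for f, m in files.items() if m > last_full}    # changed since the last full backup
            raise ValueError("unknown method: " + method)

        files = {"a.doc": 10, "b.xls": 25, "c.mdb": 40}
        print(files_to_copy(files, last_full=20, last_backup=30, method="incremental"))   # {'c.mdb'}
        print(files_to_copy(files, last_full=20, last_backup=30, method="differential"))  # {'b.xls', 'c.mdb'}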

    Read the article

  • GPT Not mounting using "normal" GPT mounting techniques 12.04

    - by Roy Markham
    I've got two 2TB drives: one MBR and the other GPT. sudo blkid /dev/sdb1 returns a blank. gdisk shows: Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Warning! Secondary partition table overlaps the last partition by 1970 blocks! You will need to delete this partition or resize it in another utility. Disk /dev/sdb: 3907027055 sectors, 1.8 TiB Logical sector size: 512 bytes Disk identifier (GUID): 38A1113D-B5E9-4B69-ABFF-ACB27AFB3DDD Partition table holds up to 128 entries First usable sector is 34, last usable sector is 3907027021 Partitions will be aligned on 8-sector boundaries Total free space is 2014 sectors (1007.0 KiB) Number Start (sector) End (sector) Size Code Name 1 34 262177 128.0 MiB 0C01 Microsoft reserved part 2 264192 3907028991 1.8 TiB 0700 Basic data partition Mounting via fstab or -t gives the same error when using NTFS or NTFS-3g: "NTFS signature is missing". GParted says one partition is overwriting another, yet Windows shows no errors at all. The drive also mounts easily under Mac OS (triple boot).
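
    The gdisk warning is at least internally consistent: partition 2 ends 1970 sectors past the last usable sector, i.e. it runs into the space reserved for the backup GPT at the end of the disk. The arithmetic, using only the numbers from the output above:

        last_usable_sector = 3907027021   # gdisk: "last usable sector is 3907027021"
        partition2_end = 3907028991       # gdisk: partition 2 "End (sector)"

        print(partition2_end - last_usable_sector)  # 1970 -> matches "overlaps ... by 1970 blocks"

    Whether that overlap is also what breaks the NTFS signature check is a separate question, but it is the first thing worth fixing.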

    Read the article

  • Benchmarking CPU processing power

    - by Federico Zancan
    Although many computer benchmarking tools are already available, I'd like to write my own, starting with processing power measurement. I'd like to write it in C under Linux, but other language alternatives are welcome. I thought of starting from floating-point operations per second, but that is just a hint. I also thought it would be sensible to keep track of the CPU's number of cores, the amount of RAM and the like, to more consistently associate results with the CPU architecture. How would you approach the task of measuring CPU computing power? And on top of that: I worry about the baseline workload induced by concurrently running services; is it correct to run the benchmark as a standalone process (possibly detached from the OS environment)?
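
    As a starting point, the floating-point idea can be prototyped in a few lines before committing to a C implementation. A rough sketch (pure Python, so interpreter overhead dominates; treat the number as an illustration of the method, not a hardware measurement):

        import time

        def estimate_mflops(n=5_000_000):
            x, acc = 1.0000001, 0.0
            start = time.perf_counter()
            for _ in range(n):
                acc = acc * x + 1.0  # one multiply and one add per iteration
            elapsed = time.perf_counter() - start
            return (2 * n / elapsed) / 1e6

        print("single-core estimate: %.1f MFLOPS" % estimate_mflops())

    Running it pinned to a single core on an otherwise idle machine is one reasonable answer to the background-services concern; recording the core count and RAM alongside the result, as you suggest, makes the numbers comparable across machines.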

    Read the article

  • 24 Hours of PASS: 15 Powerful Dynamic Management Objects - Deck and Demos

    - by Adam Machanic
    Thank you to everyone who attended today's 24 Hours of PASS webcast on Dynamic Management Objects! I was shocked, awed, and somewhat scared when I saw the attendee number peak at over 800. I really appreciate your taking time out of your day to listen to me talk. It's always interesting presenting to people I can't see or hear, so I relied on Twitter for a form of nearly real-time feedback. I would like to especially thank everyone who left me tweets both during and after the presentation. Your feedback...(read more)

    Read the article

  • Copying logins to another server

    - by DavidWimbush
    I'm busy setting up a new server to replace our main live server, and part of that is to get the logins copied over. The database users will come over when I restore the databases, but I wanted to get the logins they relate to, with the same SIDs, passwords and other properties as they have on the current server. In fact I don't even know the passwords for the logins created by our Sage accounting package - apparently they are generated by the setup using a number of ingredients unique to each installation. I did some Googling and found this KB article: http://support.microsoft.com/kb/918992/, which more or less did the trick. It produces a set of CREATE LOGIN statements with the SIDs and hashed passwords. But it didn't include the default language, which can subtly or dramatically alter the behaviour of date-related SQL. So I added that bit and you can help yourself here.

    Read the article

  • Prevent Apache restarting automatically after upgrading packages

    - by HorusKol
    Following on from an earlier question: Is there a way to download security updates and notify admin without installing the update? A large number of packages interact with Apache (especially PHP), such that security updates to those packages can cause the server to attempt to restart the service. While my earlier question was answered, I'm now thinking that I need a different solution. So: is there a way to allow security updates to be applied using apt, have an email sent to an administrator, and, most importantly, prevent services from being restarted at the end of the installation/update process? The administrator will then be able to log in and restart the service manually.

    Read the article

  • Project Euler 16: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 16. As always, any feedback is welcome.

        # Euler 16
        # http://projecteuler.net/index.php?section=problems&id=16
        # 2^15 = 32768 and the sum of its digits is
        # 3 + 2 + 7 + 6 + 8 = 26.
        # What is the sum of the digits of the number 2^1000?
        import time
        start = time.time()
        print sum([int(i) for i in str(2**1000)])
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

    Read the article

  • SAP Applications Run Better on Oracle Exadata

    - by jgelhaus
    To yield the results necessary to stay competitive, your business-critical applications must be able to access the most reliable and up-to-date information. That’s why a growing number of SAP application customers are turning to Oracle Exadata Database Machine for better performance, better productivity—and big savings. Watch our latest Webcast to find out why Oracle Exadata is the ideal platform for running your SAP applications. You’ll learn how you can: Increase the performance of SAP applications Enhance reliability with a centralized, scalable platform Ensure quick, safe, and easy deployments Watch it now. Highlights include customer case studies and practical deployment strategies.

    Read the article

  • Microsoft launches two new Data Centres for Azure in US to meet growing demand

    - by Gopinath
    In order to meet the growing demand for Windows Azure in the US, Microsoft has launched two new data centres there – East US and West US. With the addition of these two data centres, the number of Azure data centres across the globe has grown to 8, and 4 of them are located in the US. The two new data centres provide Compute and Storage resources, and a few enthusiastic customers have already deployed their applications. Other services like SQL Azure and AppFabric will be offered by these data centres in the coming months. The addition of new data centres is a good sign for Microsoft, as customer demand for its cloud offering is growing. Amazon Web Services is the pioneer in Cloud Computing and offers a wider range of cloud services compared to Microsoft. Source: Windows Azure Blog

    Read the article
