Search Results

Search found 11610 results on 465 pages for 'cloud apps'.

Page 14/465 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • ReadyNAS issue with Google Apps?

    - by Jauder Ho
    The power went out (again) in my house today, so I decided to set up some alerting. Since I have a ReadyNAS and the latest version of RAIDiator seems to have SMTP TLS support, I figured I would try setting things up to email a domain I have hosted on Google Apps. At this point, I have everything working IF I use a Gmail account, but as soon as I switch to a Google Apps email address it stops working and complains with smtpstatus=535 smtpmsg='535-5.7.1 Username and Password not accepted. Learn more at \n535 5.7.1 http://mail.google.com/support/bin/answer.py?answer=14257 30sm16076226wfd.23' errormsg='authentication failed (GNU SASL, method PLAIN)' exitcode=EX_NOPERM. I'm wondering if anyone else has encountered this. Google's extremely aggressive captcha does not help, but I am now able to log in from a browser without a captcha, so I'm open to any ideas on why the simple switch of a user/password combo that is supposed to work does not. I'm also attaching my config so that others can see how to set things up.
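    A quick way to rule the NAS itself in or out is to test the same credentials over SMTP TLS from another machine. Below is a minimal sketch in Python (the address and password are hypothetical placeholders); it attempts the same TLS-then-login sequence the ReadyNAS reports as failing.

        import smtplib

        # Hypothetical credentials - replace with the Google Apps account being tested.
        USER = "alerts@yourdomain.com"
        PASSWORD = "account-or-app-password"

        server = smtplib.SMTP("smtp.gmail.com", 587, timeout=30)
        server.starttls()  # the same TLS upgrade the ReadyNAS should be performing
        try:
            server.login(USER, PASSWORD)  # raises SMTPAuthenticationError on a 535 reply
            print("Authentication succeeded - look at the NAS configuration instead.")
        except smtplib.SMTPAuthenticationError as err:
            print("Authentication failed:", err.smtp_code, err.smtp_error)
        finally:
            server.quit()

    If this also fails with a 535, the account itself (a captcha lock or a security setting on the Google Apps side) is the more likely culprit than the ReadyNAS configuration.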

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx
    Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language; then, when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that with each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different from developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I’ve described, we need to make the change to our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try to retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.
    Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider, but at the risk of over-simplification, there are three main concepts to learn and architect within the new code. Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process - see the sketch at the end of this post. An interesting learning exercise for Stateless Programming (although not unique to this language type) is to learn Functional Programming. Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture you don’t always own the network, so its performance is unpredictable. Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible. Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a certificate, is sent along with a series of requests, or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.
    Resources: See the references for “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx  Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming  Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing  Claims-Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
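    To make the stateless idea concrete, here is a minimal sketch (not from the original post) of persisting per-operation state to storage shared by all instances rather than holding it in instance memory. The local SQLite file below is only a stand-in; on Windows Azure the same pattern would use Table storage, Blob storage, or SQL Azure.

        import json
        import sqlite3
        import uuid

        # Stand-in for shared storage; in Azure this would be Table/Blob storage or SQL Azure.
        store = sqlite3.connect("shared_state.db")
        store.execute("CREATE TABLE IF NOT EXISTS order_state (id TEXT PRIMARY KEY, state TEXT)")

        def begin_order(cart):
            """Persist everything the next step needs, then hand back only a key."""
            order_id = str(uuid.uuid4())
            store.execute("INSERT INTO order_state VALUES (?, ?)", (order_id, json.dumps(cart)))
            store.commit()
            return order_id  # nothing is kept in this instance's memory

        def charge_order(order_id):
            """Any instance (this one or a replacement) can resume the operation."""
            row = store.execute("SELECT state FROM order_state WHERE id = ?", (order_id,)).fetchone()
            cart = json.loads(row[0])
            return sum(item["price"] for item in cart["items"])

        oid = begin_order({"items": [{"sku": "A1", "price": 10.0}, {"sku": "B2", "price": 5.5}]})
        print(charge_order(oid))  # 15.5

    Because each call carries or looks up everything it needs, a role instance that disappears mid-way costs only a retry, not lost work.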

    Read the article

  • Google I/O 2010 - Developing web apps for Chrome Web Store

    Google I/O 2010 - Developing web apps for the Chrome Web Store (Chrome 101), presented by Erik Kay. Google Chrome is a powerful platform for developing web apps. With Chrome web apps, we're making it easier for users to discover and use these apps. Learn how to build and sell apps for the Chrome Web Store. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 01:00:29.

    Read the article

  • Changing SPF (Sender Policy Framework) record for Google Apps

    - by bobo
    My boss asked me to set up Google Apps for a client, and basically I have done everything, including setting up MX records in DirectAdmin and re-creating the email accounts in Google Apps. I also sent a few test emails to ensure that it actually works, and it seems fine. But then I discovered this article talking about changing the SPF record for the domain: http://www.google.com/support/a/bin/answer.py?answer=178723 After reading the introduction I think it would be better for me to change the SPF record according to this article. So I logged in to DirectAdmin, navigated to the DNS management, and found that there's already a TXT SPF record there: v=spf1 a mx a:spf.cabin.com.hk include:gmail.com -all It looks like it's already including gmail.com, but according to the article it should include _spf.google.com rather than gmail.com. I dare not change it before I understand what this record actually means. What would you do with this record if you were me?
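    For reference, the change the cited Google article describes only swaps the include mechanism, so the record would end up looking roughly like the hypothetical value in the comment below. A quick lookup also shows what is currently published; this is a sketch that assumes the third-party dnspython package (2.x) is installed, with the domain left as a placeholder.

        import dns.resolver  # third-party package: dnspython 2.x

        # Per the Google Apps article, the include should point at Google's SPF servers,
        # e.g. (hypothetical - keep whatever other mechanisms the domain already needs):
        #   v=spf1 a mx a:spf.cabin.com.hk include:_spf.google.com -all

        def show_spf(domain):
            """Print any TXT records that look like SPF for the given domain."""
            for rdata in dns.resolver.resolve(domain, "TXT"):
                text = b"".join(rdata.strings).decode()
                if text.startswith("v=spf1"):
                    print(domain, "->", text)

        show_spf("example.com")  # replace with the actual domain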

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed Computing - and more importantly “-as-a-Service” models of computing - have a different cost model. This is something that sounds obvious on the surface, but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better. In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re using things that now cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track. Here’s a case in point: assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but also reduced the cost and tied it more closely to the profit. The key is this: from the very outset - the design - make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on-premises into your new application structure.

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:
    Compute “Roles” - computers running an OS and optionally IIS; you can have more than one “Instance” of a given Role.
    Storage - Blobs, Tables and Queues for storage.
    Other Services - things like the Service Bus, Azure Connection Services, SQL Azure and Caching.
    It’s important to understand that some of these services are Stateless and others maintain State. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you’re in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren’t really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless. The Compute Role Instances in Windows Azure are Stateless. To upgrade hardware, because of a fault, or for many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It’s important to note that storage in Azure does maintain State. Your data will not simply disappear - it is maintained - in fact, it’s maintained three times in a single datacenter and all those copies are replicated to another for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained. So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a Stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there’s only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first “cashier” has someone to replace them when they disappear. It’s not just a good idea - to gain the Service Level Agreement (SLA) for uptime in Azure, it’s a requirement. We point this out right in the Management Portal when you deploy the application. When you deploy a Role Instance you can also set the “Upgrade Domain”. Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post). This example covers the scenario for upgrade, so you have four Roles total - one Web and one Worker running the “older” code, and one of each running the new code. In all those Roles you want at least two Instances, and this example shows that you're covered for High Availability and upgrade paths. The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.

    Read the article

  • Google Apps For Business, SSO, AD FS 2.0 and AD

    - by Dominique dutra
    We are a small company with 22 people in the office. We had a lot of problems with e-mail in the past, so I decided to change over to Google Apps for Business. It is the perfect solution for us, except for one thing: I need to be able to control access to the mailboxes. Only users inside the office (authenticated to AD) or users authenticated to our VPN should be able to connect to Gmail. From what I've read this is possible using the SSO (Single Sign-On) solution provided by Google - but I am having some trouble finding consistent information about it. First of all, our infrastructure: Windows Server 2008 R2 Active Directory, one domain only; Kerio Control for QoS and VPN. That's about it on our side. On Google Apps' side, I have one account and three domains that my users use to log in. The main domain has most of the users, but there are a couple of people that log in using one of the subdomains. I have three domains because I run mail for three companies and wanted all of them to be within the same control panel. Well, I found some guides on the internet, but none of them cover the AD FS installation part. I've read somewhere that I needed to download AD FS 2.0 directly from Microsoft.com, because the one that came with Windows Server was an old version. I downloaded it (adfsSetup.exe) and tried to install it, but got an error saying that I needed Windows Server 2008 SP2 for that program. My Windows Server 2008 is R2. I really need some help here; this is very important, and I don't want to have to pay $1000 for an SSO solution when I have AD set up. Can someone please point me in the right direction? Where I can find an AD FS 2.0 setup compatible with R2 would be a good start - or is the one that came with R2 already the 2.0 version? After the initial setup, there are some guides on the internet about the Google Apps part; it seems to be really easy. I also tried adding the AD FS role, but there are a bunch of options which I have no idea what they mean, and I couldn't find any guide covering that on the internet. I don't have a lot of experience with Windows Server, but we have a company which is certified and provides us with support. I can ask for their help in the later setup, but I don't think AD FS is a very common thing to deal with.

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:
    Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=LoginName;Password=myPassword; Trusted_Connection=False;Encrypt=True;
    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:
    Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=[LoginName]@[serverName];Password=myPassword; Trusted_Connection=False;Encrypt=True;
    Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change, so I don’t list them here) the server name parameter isn’t sent in a way the load balancer understands, so you need to include the server name right in the login, so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx Upshot? Include the @servername in your connection string just to be safe. And plan for that extra space…
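    The same rule carries over to other client libraries. Below is a minimal sketch in Python using the third-party pyodbc package - the driver name, server, database and credentials are all placeholders - showing the Uid value carrying the @servername suffix so the SQL Azure gateway can route the login:

        import pyodbc  # third-party package; requires a SQL Server ODBC driver to be installed

        server = "myserver"          # just the server name, not the full DNS name
        database = "myDataBase"
        user = "LoginName"
        password = "myPassword"

        conn_str = (
            "Driver={SQL Server Native Client 10.0};"   # use whichever driver is installed
            f"Server=tcp:{server}.database.windows.net,1433;"
            f"Database={database};"
            f"Uid={user}@{server};"                     # the @servername part discussed above
            f"Pwd={password};"
            "Encrypt=yes;"
        )

        conn = pyodbc.connect(conn_str)
        print(conn.execute("SELECT 1").fetchone())

    Remember that the 128-character limit mentioned above applies to the whole login, so keep the user name short enough to leave room for @servername.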

    Read the article

  • Google Apps for Mail - MX entry related doubt

    - by niting
    I have recently signed up for Google Apps for my organisation. The Google Apps guide says that I need to edit the MX entry for my domain so that the mail gets redirected to the Google mail servers instead of my default mail server. But I am not sure whether to edit the MX entry at my domain name provider or on the hosting server. My domain name provider is godaddy.com and my server is ServInt. Moreover, what difference does it make whether I edit the MX entries at my hosting provider or at my domain name provider? Thanks, niting

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, connection-string, and exploration issues all over again. Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options by which you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered and you’ve assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924  You can make plain HTTP calls instead of writing code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
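    Since the service can expose an OData feed, a few lines of Python are enough to pull data over plain HTTP. This is only a sketch - the feed URL and account key below are hypothetical placeholders, and the real authentication details are in the documentation linked above:

        import requests  # third-party package

        # Hypothetical feed URL and key - substitute the values from your own portal.
        FEED_URL = "https://api.datamarket.azure.com/YourOrg/YourDataSet/"
        ACCOUNT_KEY = "your-account-key"

        response = requests.get(
            FEED_URL,
            params={"$top": 10, "$format": "json"},  # standard OData query options
            auth=("", ACCOUNT_KEY),                  # basic auth with the account key
            timeout=30,
        )
        response.raise_for_status()
        for row in response.json().get("d", {}).get("results", []):
            print(row)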

    Read the article

  • outgoing mail for web app (multiple domains as sender)

    - by solid
    I have a web app "myapp.com" that users can use to set up their own websites. Our application is written in PHP and should be able to do the following: send mails to our own users "from: [email protected]", and send mails from our clients to their clients "from: [email protected]". We don't need to take care of incoming mail, just send out mail with the correct From and Reply-To addresses. We cannot make this work using Google Apps (it is limited to our own domain in the From field), and we cannot set up Google Apps accounts or domains for all our clients, so we are looking for another solution that is simple to manage and set up. Does anyone have experience with this? Please let me know! Thanks
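    Whatever relay ends up being chosen, the mechanics of setting the From and Reply-To headers per client are simple. Here is a rough sketch (in Python rather than PHP, purely to illustrate the idea; the relay host, credentials and addresses are placeholders) of authenticated SMTP submission with a per-client sender:

        import smtplib
        from email.message import EmailMessage

        # Placeholder relay and credentials - a transactional mail provider or your own MTA.
        RELAY_HOST, RELAY_PORT = "smtp.relay.example.net", 587
        RELAY_USER, RELAY_PASS = "myapp-smtp-user", "secret"

        def send_for_client(client_domain, to_addr, subject, body):
            """Send on behalf of a client, with their domain in From and Reply-To."""
            msg = EmailMessage()
            msg["From"] = f"noreply@{client_domain}"
            msg["Reply-To"] = f"info@{client_domain}"
            msg["To"] = to_addr
            msg["Subject"] = subject
            msg.set_content(body)

            with smtplib.SMTP(RELAY_HOST, RELAY_PORT) as relay:
                relay.starttls()
                relay.login(RELAY_USER, RELAY_PASS)
                relay.send_message(msg)

        send_for_client("client1.com", "customer@example.org", "Hello", "Sent on behalf of client1.com")

    Keep in mind that each client domain will generally need SPF (and ideally DKIM) records authorizing whichever relay you pick, or this mail will tend to land in spam.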

    Read the article

  • How to move mail among Google Apps for Domains users

    - by Paul Roub
    Considering moving the domain used by my extended family for email to Google Apps. One less server for me to manage, better spam filtering, etc. One thing that's been nice about running my own has been the way I manage my kids' incoming email - it comes to me first, and I drop good mail in a symlinked IMAP folder that we share. A little procmail is all it takes, and straight-through exceptions are easy to implement. (FYI, no I'm not advocating censorship, but manually filtering spam and viruses from my 8-year-old's inbox seems like the right thing to do. YMMV) Anyway. I'm wondering if there's an easy way to do something similar in Google Apps - setting up filters to auto-redirect to me looks easy enough (any gotchas there?), but moving things back is not obvious. Yes, I could access both accounts via IMAP and drag mails across, but does anyone have an easier way?
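    If the IMAP route turns out to be the practical one, it can at least be scripted rather than dragged by hand. Below is a rough sketch using Python's standard imaplib, with hypothetical account names and labels; it copies messages from a holding label in one Google Apps account into another account's inbox.

        import imaplib

        IMAP_HOST = "imap.gmail.com"

        # Hypothetical accounts and labels - adjust to your own setup.
        SRC_USER, SRC_PASS = "parent@example.com", "app-password"
        DST_USER, DST_PASS = "kid@example.com", "app-password"
        SRC_LABEL = "Approved"  # label where reviewed mail is filed

        src = imaplib.IMAP4_SSL(IMAP_HOST)
        src.login(SRC_USER, SRC_PASS)
        src.select(SRC_LABEL, readonly=True)

        dst = imaplib.IMAP4_SSL(IMAP_HOST)
        dst.login(DST_USER, DST_PASS)

        typ, data = src.search(None, "ALL")
        for num in data[0].split():
            typ, msg_data = src.fetch(num, "(RFC822)")
            raw_message = msg_data[0][1]
            # APPEND drops the message into the destination mailbox unchanged.
            dst.append("INBOX", None, None, raw_message)

        src.logout()
        dst.logout()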

    Read the article

  • Multiple Java Apps on single Server

    - by kdssea
    What is the best way to host potentially dozens of fairly trivial Java web applications on a single machine? These will be for different clients, so having them isolated from each other is important, so that if one goes down, they don't all go down. I realize Tomcat can handle multiple apps on its own, but having them all run in a single Tomcat instance sounds a little scary to me. An instance of Tomcat per app also seems silly, since these apps are likely to be fairly basic. Any thoughts?

    Read the article

  • Convert C# Silverlight App To AZURE CLOUD Platform?!?!

    - by Goober
    The Scenario: I've been following Brad Abrams' Silverlight tutorial on his blog. I have tried following Brad's "How to deploy your app to the Cloud" tutorial, however I'm struggling with it, even though it is in the same context as the first tutorial. The Question: Is the application structure essentially the same as the original "non-cloud based version"? If not, which parts are different? (I get that there is a Cloud Service project added to the solution - but what else?) Connection String Issue: In my "non-cloud based application", I make use of the ADO.NET Entity Framework to communicate with my database. The connection string in my web.config file looks like:
    <add name="InmZenEntities" connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=CHASEDIGITALWS3;Initial Catalog=InmarsatZenith;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /></connectionStrings>
    However, the connection string that I get from SQL Azure looks like:
    Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=master;User ID=simongilbert;Password=myPassword;Trusted_Connection=False;
    So how do I go about merging the two when I move the "non-cloud based application" to the cloud? Any help regarding converting a Silverlight application to a cloud service and deploying it would be greatly appreciated.

    Read the article

  • Some free cloud solution to enhance your business

    - by Saif Bechan
    I am co-owner of a small internet business. I am in charge of IT, and I try to get things done at as low a cost as possible. When investing in servers, resources and overall business costs, your project can soon turn into a financial disaster. Cloud solutions can help you solve some financial problems, and they can help with scalability problems and the overall performance problems of your server or web application. Recently I moved the whole internal/external communication (email, calendar, documents) of my business to the cloud. I did this by using the free version of Google Apps. This works great and is a big advantage on multiple levels. I do not have to fight spam on my system anymore, and there are fewer resources used on my system. Also, switching servers will be a lot easier. Questions: Can you name some cloud solutions that you have used, or some you would just recommend? They can range from financial benefits to organizational benefits to performance benefits. It doesn't matter, as long as it helps you spread the load of your business.

    Read the article

  • What considerations should be made for a web app to be released on a cloud hosted system?

    - by Rhubarb
    I have a web app that is primarily a WordPress app, but it pulls content from a Django app, simply by calling a service that uses Django models. My understanding of cloud computing is a bit vague. If the site needs to scale up on short notice, does the cloud provider (Amazon, Rackspace, whoever) simply spin up new instances (copies) of my initially configured server? How is state managed between all of them? Are there any good primers on this subject? It's hard to find much out there without getting caught up in the marketing.

    Read the article

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace Cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, a huge mess for MySQL), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM makes decent backups possible. As more apps are added, the backup script should not need to change much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on the Rackspace Cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick
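    For what it's worth, the file-as-physical-volume idea can be sketched with a loop device. The outline below (plain commands wrapped in Python for illustration, run as root) is untested and assumes the cloud server's kernel has loop and device-mapper support; sizes, paths and the loop device number are placeholders.

        import subprocess

        def run(cmd):
            """Run a shell command and stop on the first failure."""
            print("+", cmd)
            subprocess.run(cmd, shell=True, check=True)

        # 1. Create a large sparse file to act as the "physical volume".
        run("truncate -s 20G /backingstore.img")

        # 2. Attach it to a loop device and hand it to LVM.
        run("losetup /dev/loop0 /backingstore.img")
        run("pvcreate /dev/loop0")
        run("vgcreate datavg /dev/loop0")

        # 3. Carve out the data volume, leaving free space in the VG for snapshots.
        run("lvcreate -L 15G -n srv datavg")
        run("mkfs.ext3 /dev/datavg/srv")
        run("mount /dev/datavg/srv /srv")

        # At backup time: snapshot, copy the snapshot off to Cloud Files, then drop it.
        run("lvcreate -s -L 2G -n srvsnap /dev/datavg/srv")
        # ... mount /dev/datavg/srvsnap read-only and push it to Cloud Files here ...
        run("lvremove -f /dev/datavg/srvsnap")

    Note that the backing file still lives on the single root volume, so this buys consistent snapshots for the backup, not protection against losing the root disk itself.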

    Read the article

  • Oracle's Cloud Computing Events

    - by Peeyush Tugnawat
    Here is a useful link to Oracle's full-day events on Cloud Computing worldwide: http://www.oracle.com/events/cloudcomputing/index.html Other Oracle Cloud Computing resources: Oracle's Cloud Computing Products and Services, and Oracle's Cloud Computing Resource Center. Others: my previous post about Cloud Computing.

    Read the article

  • SharePoint 2010, Cloud, and the Constitution

    - by Michael Van Cleave
    The other evening an article on the Red Tape Chronicles caught my eye. The article, written by Bob Sullivan and titled "The Constitutional Issues of Cloud Computing", was very interesting with regard to the direction most of the technical world is going. We have all been inundated with reasons to utilize cloud computing - price, availability, or even scalability - but what Bob brings up is a whole separate view of why a business might not want to move toward the cloud for services or applications. The overall point of the article was pretty simple. It all boiled down to the summation that hosting "things" in the cloud (email, documents, etc.) is interpreted differently under the law regarding constitutional search and seizure than, say, a document or item that is kept in physical form at a business or home. If you physically have it stored, someone would have to get a warrant to search for it or seize it; but if it is stored off in the cloud and the ISV or provider is subpoenaed for the item, then they will usually give access to the information. Obviously this is a big difference in interpretation of the law and the constitution due to technology. So you might ask, "Where does this fit in with SharePoint?" Well, the overall push for this next version of SharePoint is one that gives a business ultimate flexibility to utilize the cloud. For example, this upcoming version gracefully lends itself to multi-tenancy, so that online or "cloud" hosting by service providers is possible. Another aspect of the upcoming version is that it has updated its ability to store content outside of the database, in a cheaper, commoditized storage facility. This is called Remote Blob Storage (or RBS), which is the next evolution of External Blob Storage (or EBS). With this new functionality that businesses might look forward to, it is extremely important for them to understand that they might be opening themselves up to searches or seizures of their cloud-stored information that do not require a warrant. It will be interesting to see how this all plays out in the next few months. Usually the law changes slowly in comparison to technology, so it might be a while until we see whether it is actually constitutional to treat someone's content in the cloud differently than it would be treated in their possession. Until some type of parity happens, or more concrete laws address the differences, be very careful about what you put in the cloud. Michael

    Read the article

  • My Cloud: a personal cloud service - a serious alternative to online storage services?

    The manufacturer Western Digital is taking advantage of the PRISM affair to offer a new approach to storing digital data in small businesses or within the family. WD, a Western Digital subsidiary, offers My Cloud, a personal cloud service whose promises include simplicity of use. With this alternative to the public cloud solutions offered by...

    Read the article

  • Cloud security and privacy

    - by Rakesh K
    Hi, I have a very basic doubt regarding cloud computing, which is catching on pretty fast these days. To my understanding, cloud computing is a paradigm in which companies put their data and applications on somebody else's machines, aka 'The Cloud'. I want to know just how secure it is to put my data on some third-party machines, especially if my data contains private details. In particular, how can an enterprise trust cloud computing service providers on this data privacy aspect? Thanks, rakesh.

    Read the article

  • cloud and existing enterprise applications technologies

    - by maxxxee
    What is the significance of new cloud platforms and databases like Microsoft Azure and Amazon EC2? Are they a replacement for enterprise application platforms like .NET or JEE in a cloud environment? Is it necessary to use these or other cloud-specific platforms, or can we implement .NET or JEE in a cloud-based environment?

    Read the article
