Search Results

Search found 2224 results on 89 pages for 'scientific computing'.

Page 4 of 89

  • Cloud Computing: Syntec Numérique publishes part 3 of its white paper

    - by Eric Bezille
    A client/supplier vision brought together around a draft contractual framework
    At the Cloud Computing World Expo held at the CNIT last week, I attended the presentation of Syntec Numérique's new white paper installment on Cloud Computing and the "new models" it brings about: business models, contracts, client-supplier relationships, and the organization of the IT department. What sets this white paper apart from those already available in the field is that it brings together all the client-side players (through CRIP) and the suppliers around a framework for formalizing contracts, building on the e-SCM model.
    An accelerated shift of IT into a service provider, and the end of siloed IT?
    While Cloud Computing accelerates IT's transition into a service provider (following on from ITIL v3), it also highlights the challenge that this disruptive model poses for CIOs, requiring cross-functional skills to guarantee the qualities expected of a Cloud Computing service: on-demand "self-service" deployment, standardized access over the network, management of pooled, shared resources, and an "elastic" service that can be grown or shrunk rapidly according to measurable demand. It should be clear from this that Cloud Computing goes well beyond simple server virtualization. As Constantin Gonzales rightly describes in his blog ("Three Enterprise Principles for Building Clouds"), what matters is adherence to the standard for the service access interface. How the service is then implemented (in the cloud) is the provider's burden and responsibility: it is up to the provider to optimize it as best as possible to stay competitive, while guaranteeing the expected service levels. The service provider, of course, must master this implementation, which rests essentially on integrating and automating the necessary layers and components... over time... while absorbing the evolution of each of those elements. The client, for their part, must always ensure the reversibility of the solution through adherence to standards... a point also covered in the Syntec white paper, which recalls the key points to watch and takes stock of the progress of standards around Cloud Computing. Happy reading...

    Read the article

  • CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

    - by Wenyu Duan
    In today's era, Cloud Computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver to achieve growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing complete lifecycle services for E&C projects. A CISDI group delegation led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info, and Mr. Xin Xiao, Head of CISDI Info's R&D, joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled "E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products". CISDI group plans to build its cloud computing platform in three phases: first, it will relocate its existing technologies to Oracle systems and establish a private cloud for CISDI; second, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally, it plans to launch an industrial cloud with a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. "CISDI Cloud" will become the growth engine for the organization to expand its global reach through online services and achieve the strategic objective of being the preferred choice of E&C companies worldwide. The new cloud computing platform is designed to provide access to a shared pool of computing resources in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency, while greatly cutting deployment and maintenance costs. The CISDI delegation highlighted these points as the key reasons why the group decided on a strategic collaboration with Oracle for building this world-class industrial cloud:
    - Oracle's strategy: Open, Complete and Integrated
    - Oracle as the only company that can provide engineered systems, with a complete product chain of hardware and software
    - Exadata, Exalogic and EM 12c to provide a solid foundation for "CISDI Cloud"
    The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud. CISDI group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

    Read the article

  • Getting Started with Cloud Computing

    - by juanlarios
    You've likely heard about how Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more info on what Cloud Computing is, including the distinction between "Public Cloud" and "Private Cloud". I want to address these questions and help you get started. Let's begin with a brief set of definitions and some places to find more info; an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy.
    Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization; that is done by someone else. Users simply connect to these services from their computers, and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others.
    Private Cloud computing generally means that the hardware and software to run services used by your organization is run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Today, Private Cloud implementations are generally found in larger organizations, but they are also viable for small and medium-sized businesses, since they generally allow an automation of services and a reduction in IT workloads when properly implemented. Having the right management tools, like System Center 2012, to implement and operate a Private Cloud is important for success.
    So, how do you get started? The first step is to determine what makes the most sense for your organization. The nice thing is that you do not need to pick Public or Private Cloud; you can use elements of both where they make sense for your business. The choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial of each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure. For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy, then download and install the Microsoft Private Cloud technologies, including Windows Server 2008 R2 Hyper-V and System Center 2012, in your own environment and take them for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering, such as the IT Virtualization Boot Camps, to get you started with these technologies hands-on.
    Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year, through something we call "The Global Relationship Study", they reach out and contact you to see how they're doing and what Microsoft could do better. If you get an email from "Microsoft Feedback" with the subject line "Help Microsoft Focus on Customers and Partners" between March 5th and April 13th, please take a little time to tell them what you think.
    Cloud Computing Resources:
    Microsoft Server and Cloud Computing site – information on Microsoft's overall cloud strategy and products.
    Microsoft Virtual Academy – free online training to help improve your IT skill set.
    Office 365 Trial/Info page – get more information or try it out for yourself.
    Office 365 Videos – see how businesses like yours have used Office 365 to transition to the cloud.
    Windows Intune Trial/Info – get more information or try it out for yourself.
    Microsoft Dynamics CRM Online page – information on trying and licensing Microsoft Dynamics CRM Online.
    Additional Resources You May Find Useful:
    Springboard Series – your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure.
    TechNet Evaluation Center – try some of the latest Microsoft products for free, like System Center 2012 pre-release products, and evaluate them before you buy.
    AlignIT Manager Tech Talk Series – a monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate in real time or watch the on-demand recording.
    Tech·Days Online – discover what's next in technology and innovation with Tech·Days session recordings, hands-on labs and Tech·Days TV.

    Read the article

  • Cloud Computing Business Benefits

    - by workflowman
    Unless you have been living under a rock for the past year, you will have heard about cloud computing. Cloud computing is a loose term that describes anything hosted in data centers and accessed via the internet. It is normally associated with developers, who draw clouds in diagrams to indicate where services are hosted or how systems communicate with each other. Cloud computing also incorporates such well-known trends as Web 2.0 and Software as a Service (SaaS), and more recently Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Its aim is to change the way we compute, moving from traditional desktops and on-premises servers to services and resources hosted in the cloud.
    Benefits of Cloud Computing
    There are clear benefits to building applications using cloud computing, some of which are listed here:
    Zero up-front investment: Delivering a large-scale system costs a fortune in both time and money. Often IT departments are split into hardware/network and software services, where the hardware team provisions servers and so forth against the requirements of the software team, frequently under a separate budget that requires approval. Although hardware and software management are two separate disciplines, developers are sometimes given the task of estimating CPU cycles, disk space, and so forth, which ends up in underutilized servers.
    Usage-based costing: You pay for what you use, no more, no less, because you never actually own the server. This is similar to car leasing, where in the long run you get a new car every three years and maintenance is never a worry.
    Potential for shrinking processing time: If a process is split over multiple machines, the work is performed in parallel, which decreases processing time.
    More office space: Walk into most offices and, guaranteed, you will find a medium-sized room dedicated to servers.
    Efficient resource utilization: Resource utilization is handled by a centralized cloud administrator, who is in charge of deciding exactly the right amount of resources for a system. This takes the task away from local administrators, who would otherwise have to monitor these servers regularly.
    Just-in-time infrastructure: If your system is a success and needs to scale to meet demand, provisioning new hardware can cause time delays or a slow-performing service. Cloud computing solves this because you can add more resources at any time.
    Lower environmental impact: If servers are centralized, an environmental initiative is more likely to succeed. As an example, if servers are placed in sunny or windy parts of the world, why not use those resources to power them?
    Lower costs: Unfortunately, this is one point that administrators will not like. If you have people administering your email server and network, along with support staff doing other tasks that are now cloud-based, this workforce can be reduced. This saves costs, though it also reduces jobs.

    Read the article

  • JavaScript distributed computing project

    - by Ben L.
    I made a website that does absolutely nothing, and I've proven to myself that people like to stay there - I've already logged 11+ hours worth of cumulative time on the page. My question is whether it would be possible (or practical) to use the website as a distributed computing site. My first impulse was to find out if there were any JavaScript distributed computing projects already active, so that I could put a piece of code on the page and be done. Unfortunately, all I could find was a big list of websites that thought it might be a cool idea. I'm thinking that I might want to start with something like integer factorization - in this case, RSA numbers. It would be easy for the server to check if an answer was correct (simply test for modulus equals zero), and also easy to implement. Is my idea feasible? Is there already a project out there that I can use?
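
    A minimal sketch of the server-side verification step described above, written in Python for brevity (the site itself would use JavaScript, and the function name here is illustrative): a client submits a claimed factor p of the RSA number n, and the server accepts it only if p divides n exactly.

        def verify_factor(n: int, p: int) -> bool:
            # A claimed factor is valid only if it is non-trivial and divides n exactly
            return 1 < p < n and n % p == 0

        # Toy example with a small semiprime; real RSA numbers run to hundreds of
        # digits, but Python integers are arbitrary-precision, so the same check applies.
        n = 3233  # 61 * 53
        print(verify_factor(n, 61))  # True
        print(verify_factor(n, 7))   # False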

    Read the article

  • Why Ultra-Low Power Computing Will Change Everything

    - by Tori Wieldt
    The ARM TechCon keynote "Why Ultra-Low Power Computing Will Change Everything" was anything but low-powered. The speaker, Dr. Jonathan Koomey, knows his subject: he is a Consulting Professor at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University, Yale University, and UC Berkeley's Energy and Resources Group. His current focus is creating a standard metric (computations per kilowatt-hour) and measuring computer energy consumption over time. The trends are impressive: the energy needed per computation has halved every 1.5 years for the last 60 years (over 60 years, that is 40 halvings, roughly a trillion-fold improvement; a quick check of that arithmetic follows below), and battery life has made roughly a 10x improvement each decade since 1960. It's these improvements that have made laptops and cell phones possible. What does the future hold? Dr. Koomey said that in the past, the race among chip manufacturers was to create the fastest computer, but the priorities have now changed: new computers are tiny, smart, connected and cheap. "You can't underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices," he said. There is also a confluence of trends in computing, communications, sensors, and controls. The challenge is how to reduce the power requirements for these tiny devices. Alternate sources of power being explored include light, heat, motion, and even blood sugar. The University of Michigan has produced a miniature sensor that harnesses solar energy and could last for years without needing to be replaced, and the University of Washington has created a sensor that scavenges power from existing radio and TV signals. Devices designed for a specific purpose are much more efficient than general-purpose computers. With all these sensors, instead of big data, developers should focus on nano-data: personalized information that will adjust the lights in a room, a machine, a variable sign, and so on. Dr. Koomey showed some examples: the Proteus Digital Health Feedback System, an ingestible sensor that transmits when a patient has taken their medicine and is powered by their stomach juices (gives "powered by you" a whole new meaning!); Streetline Parking Systems, which provide real-time data about available parking spaces that can be sent to your phone or used to update parking signs around the city, meaning less driving around looking for a space; and the BigBelly trash system, which uses solar power, compacts trash, and sends a text message when it is full, dramatically reducing the number of times a truck has to come to pick up trash, freeing up resources and slashing fuel costs. That last one is a classic example of the efficiency of moving "bits not atoms." But researchers are approaching the physical limits of sensors, Dr. Koomey explained. At the current rate of technology improvement, they'll reach the three-atom transistor by 2041, and once they hit that wall, it will force a revolution in the way we do computing. But wait: researchers at Purdue University and the University of New South Wales are both working on reliable one-atom transistors! Other researchers are working on "approximate computing" that will reduce computing requirements drastically. So it's unclear where the wall actually is. In the meantime, as Dr. Koomey promised, ultra-low power computing will change everything.
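
    A quick back-of-the-envelope check of that efficiency trend, as a sketch in Python (the 1.5-year halving time is the round figure quoted in the talk, not a precise constant):

        # If energy per computation halves every 1.5 years, the cumulative
        # efficiency gain over 60 years is 2^(60/1.5) = 2^40.
        years, halving_time = 60, 1.5
        factor = 2 ** (years / halving_time)
        print(f"{factor:.2e}x improvement")  # about 1.10e+12, i.e. a trillion-fold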

    Read the article

  • Commercial uses for grid computing?

    - by paxdiablo
    I keep hearing from associates about grid computing which, from what I can gather, is highly distributed stuff along the lines of SETI@Home. Is anyone working on this sort of system for business use? My interest is in figuring out whether there's a commercial reason to start software development in this field.

    Read the article

  • Suggestions for open source testing tool for cloud computing

    - by vikraman
    Hi, I want to know if there is any open source testing tool for cloud computing. We have built a cloud framework with Xen, Eucalyptus, Hadoop, and HBase as different layers. I am not looking at testing each of these tools separately; I want to test them from the perspective of fitting into a cloud environment (for example, the scalability of the Xen hypervisor in handling multiple VMs). It would be great if you could suggest an open source tool for the above.

    Read the article

  • Languages/Methods to Learn for Scientific Computing?

    - by Zéychin
    I'm a second-semester junior working towards a Computer Science degree with a Scientific Computing concentration and a Mathematics degree with a concentration in Applied Discrete Mathematics. So: number crunching and such, rather than a bunch of regular expressions, interface design, and networking. I've found that I'm not learning new relevant languages from my coursework and am interested in what the community would recommend I learn. As far as programming methods go, I know I need to learn more about parallelizing programs, but if there's anything else you can recommend, I would appreciate it. Here's a list of the languages with which I am very experienced (web technologies omitted, as they barely apply here): Java, C, C++, Fortran 77/90/95, Haskell, Python, MATLAB. Any recommendations for additional languages I should learn would be very much appreciated!

    Read the article

  • Opportunities in Cloud Computing

    - by Paul Sorensen
    A recent article from CIO Journal indicates that there is an extreme labor shortage (in certain technology areas) that is leading to upward pressure on wages for IT workers. This represents a great opportunity for those with certain skill sets, among which is Java (Oracle certification is mentioned specifically). The article points out that a key driver of the labor shortage is the expansion of cloud computing. Cloud computing is set up to make life extremely simple for end users, but the model pushes the complexity to back-end systems which are sophisticated, enterprise-level computing stacks (Oracle has an extensive set of cloud computing solutions). These complex systems require very highly skilled IT professionals (the best of the best) to successfully develop, implement, administer and maintain them. What this means for you is that there is opportunity for those who have the appropriate skills at the appropriate levels. If you want to be a part of this opportunity, you should do a self-assessment of your own skill sets and experience. Based upon the results, you can decide where it would be most appropriate to spend your time and resources for the highest return on your investment. By expanding and sharpening your skills and by gaining greater experience, you will be better prepared to take advantage of career opportunities (like this one) that come along periodically. As you evaluate your needs, remember that Oracle University has a tremendous selection of high-quality education offerings (including training and certification) that can help you move your career forward. Thanks and best of luck!

    Read the article

  • Parallel Computing in .Net 4.0

    - by kaleidoscope
    Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
    - The problem is run using multiple CPUs
    - The problem is broken into discrete parts that can be solved concurrently
    - Each part is further broken down into a series of instructions
    - Instructions from each part execute simultaneously on different CPUs
    Parallel Extensions in .NET 4.0 provide a set of libraries and tools to achieve the above objectives, supporting two paradigms of parallel computing (a short sketch of both follows below):
    Data Parallelism – dividing the data across multiple processors for parallel execution. For example, if we are processing an array of 1,000 elements, we can distribute the data between two processors, say 500 elements each. This is supported by Parallel LINQ (PLINQ) in .NET 4.0.
    Task Parallelism – breaking the program down into multiple tasks that can be parallelized and executed on different processors. This is supported by the Task Parallel Library (TPL) in .NET 4.0.
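
    A minimal, language-neutral sketch of the two paradigms using Python's standard library rather than the C#-specific PLINQ/TPL APIs (the helper function here is illustrative, not part of any framework):

        from concurrent.futures import ProcessPoolExecutor

        def square(x: int) -> int:
            return x * x

        if __name__ == "__main__":
            with ProcessPoolExecutor(max_workers=2) as pool:
                # Data parallelism: one operation, with the input data split
                # across processes (the role PLINQ plays in .NET 4.0).
                squares = list(pool.map(square, range(1000)))

                # Task parallelism: distinct, independent tasks run concurrently
                # (the role the Task Parallel Library plays in .NET 4.0).
                t1 = pool.submit(square, 42)
                t2 = pool.submit(sum, range(1000))
                print(squares[:5], t1.result(), t2.result())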

    Read the article

  • Technical Computing

    Today, Microsoft announced our Technical Computing initiative. Through the Technical Computing initiative, we will enable scientists, engineers and analysts to more easily model the world at much greater fidelity. The Technical Computing initiative will address a wide range of users. One of the most critical elements is to help developers create applications that can take advantage of parallelism on their desktop, in a cluster, and in public and private clouds.

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler. This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations (a sketch of this pattern follows below). Whatever problem is addressed, the value this component provides is the same: it handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances.
    Drivers: elastic compute and storage resources; cost avoidance.
    Solution: Here's a sketch of a solution using our Windows Azure HPC SDK, with the following ingredients:
    Web Role – hosts an HPC Scheduler web portal to allow web-based job submission and management. It also exposes an HTTP web service API to allow other tools (including Visual Studio) to post jobs as well.
    Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes.
    Database – stores state information about the job queue and resource configuration for the solution.
    Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed.
    Training: Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)
    Windows Azure HPC Scheduler (3 labs) – The Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
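
    To make "embarrassingly parallel" concrete, here is a minimal sketch of a Monte Carlo estimate of pi: each worker runs independent trials, and the head node only has to sum the results at the end. The sketch uses plain Python processes as stand-in workers; it does not use the Azure HPC SDK, and the names are illustrative.

        import random
        from concurrent.futures import ProcessPoolExecutor

        def hits_in_circle(trials: int) -> int:
            # Count random points in the unit square that land inside the quarter circle
            rng = random.Random()
            return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(trials))

        if __name__ == "__main__":
            workers, per_worker = 4, 250_000
            # Workers are fully independent; no communication happens until the
            # final sum, which is what lets a scheduler farm tasks out widely.
            with ProcessPoolExecutor(max_workers=workers) as pool:
                total = sum(pool.map(hits_in_circle, [per_worker] * workers))
            print("pi is approximately", 4 * total / (workers * per_worker))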

    Read the article

  • With MSDN and BizSpark, Cloud Computing is Closer than You Think

    Cloud computing offers significant advantages for businesses of all sizes, and it's easier to get started than you think. Microsoft makes Windows Azure compute time available for MSDN subscribers, as well as for software start-ups through the Microsoft BizSpark program. Learn why cloud computing is a good fit for you and how you can get started.

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors are hard-wired to implement the assembly instructions. Up to now I always thought I wouldn't get more done if I learned more about the "low level". One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer over the hardware? One small example: in programming I have never come across the need to understand what L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to higher levels. On the other hand, I wonder how much influence the lower levels can have, for example in graphics computing. Then there is the theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that NP problems are kind of impossible to solve for large N. What I'm missing is a reference framework for thinking about these things. It seems to me that there are all kinds of different camps who rarely interact. The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, and so on.

    Read the article

  • Quantum Computing and Encryption Breaking

    - by Earlz
    OK, I read a while back that quantum computers can break most types of hashing and encryption in use today in a very short amount of time (I believe it was mere minutes). How is it possible? I've tried reading articles about it, but I get lost at the point where a quantum bit can be 1, 0, or something else. Can someone explain how this relates to cracking such algorithms, in plain English without all the fancy maths?

    Read the article
