Search Results

Search found 3589 results on 144 pages for 'cluster computing'.

Page 85/144 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • SMBs embrace the Cloud according to IBM, a more measured view in Europe from Forrester

    SMBs embrace the Cloud, according to IBM; a more measured view in Europe from Forrester. Businesses are reportedly planning to increase their IT budgets and to move much more broadly toward Cloud Computing. Those, at any rate, are IBM's predictions for the next 12 months, after a survey of 2,112 SMB executives. The adoption of Cloud technologies and/or projects will therefore be a major strategic factor for SMBs in 2011. IBM's study even asserts that two thirds of SMBs are currently planning or deploying a Cloud project to improve the management of their IT environment. This shift toward the Cloud is justified, according to Big Bl...

    Read the article

  • Is OO programming really as important as hiring companies make it out to be?

    - by ale
    I am just finishing my master's degree (in computing) and applying for jobs. I've noticed many companies specifically ask for an understanding of object orientation. Popular interview questions are about inheritance, polymorphism, accessors, etc. Is OO really that crucial? I even had an interview for a programming job in C, and half the interview was about OO. In the real world, developing real applications, is object orientation nearly always used? Are key features like polymorphism used A LOT? I think my question comes from one of my weaknesses: although I know about OO, I don't seem to be able to incorporate it a great deal into my programs. I would be really interested to get people's thoughts on this!

    Read the article

  • I'm trying to install postgresql on 12.04, and it's just not working

    - by Pointy
    I've got a new installation and I'm trying to get PostgreSQL working. The database was installed and I started a restore from a dump taken on another machine, but that ran into problems because I had forgotten to install the "contrib" package. I used "pg_dropcluster" to drop the old cluster so I could start from scratch, and that's when things went weird. The first manifestation of this was that /etc/postgresql was just empty. I uninstalled the postgresql package and reinstalled, several times, to no avail. Is there something I can do to figure out why apt is confused here? I've done this many times on many systems and never seen anything like this happen.

    Read the article

  • Welcome to the Database Cloud CoverAge blog

    - by B R Clouse
    Welcome to the Database Cloud CoverAge blog, brought to you by Oracle's Database Cloud Architecture Team. We've spent the past few years developing best practices for database consolidation projects, for delivering Database as a Service, and for designing and driving corporate cloud initiatives. Many of our experiences and lessons learned are available in a growing collection of collateral that you can find on our OTN page. We decided to join the blogosphere to distill key concepts into short posts that you, our readers, can digest quickly. Also, this medium allows you to comment on our posts and collateral -- to share experiences, challenge our conclusions, critique our recipes, and help us choose topics to blog about. Watch for our next posting, which will start a series on your journey into cloud computing.

    Read the article

  • March 2011 Chicago Information Technology Architects Group Meeting

    - by Tim Murphy
    How did we get to March already? My, how time flies when you are having fun. We had a spirited discussion on Enterprise Architecture at the February meeting. Well, let's keep the fun rolling. The hottest technology right now is anything to do with mobile computing. We had an arm-wrestling match to decide who was going to present on Mobile Architecture. Come see the winner (actually the guy who had time to put the presentation together) on March 15th at the Chicago Information Technology Architects meeting. You can register at the link below. Register If you have a topic you would be interested in presenting at a future event, please contact me through this blog. del.icio.us Tags: CITAG,Chicago Information Technology Architects Group,mobile architecture

    Read the article

  • links for 2011-02-15

    - by Bob Rhubart
    Why the hybrid cloud model is the best approach | Cloud Computing - InfoWorld
    Although some cloud providers look at the hybrid model as blasphemy, there are strong reasons for them to adopt it, says David Linthicum. (tags: davidlinthicum cloud)

    Exadata Part V: Monitoring with Database Control | The Oracle Instructor
    Uwe Hesse shows how "we can use Oracle Enterprise Manager Database Control to monitor an Exadata Database Machine, especially the Storage Servers (Cells)." (tags: oracle exadata)

    ATG Live Webcast Feb. 24th: Using the EBS 12 SOA Adapter (Oracle E-Business Suite Technology)
    "This live one-hour webcast will offer a review of the Service Oriented Architecture (SOA) capabilities within E-Business Suite R12, focusing on the E-Business Suite Adapter." (tags: oracle soa)

    Oracle Forms Migration to ADF - Webinar from ORACLE partner PITSS (Oracle Fusion Middleware for the financial sector)
    "Join Oracle's Grant Ronald and PITSS to see a software architecture comparison of Oracle Forms and ADF and a live step-by-step presentation on how to achieve a successful migration." (tags: oracle adf)

    Read the article

  • Microsoft offers a free 30-day trial of Dynamics CRM Online to mark its "return in force" in CRM

    Dynamics CRM Online: the return in force of Microsoft's CRM. Microsoft is offering a free 30-day trial version. TechDays 2011 gave Développez.com the opportunity to catch up with Sophie Jacquet, Microsoft Product Manager, for an update on Dynamics CRM. Microsoft Dynamics CRM is indeed getting a new lease on life with the release of its 2011 version and, above all, a SaaS version, dubbed "Dynamics CRM Online", launched before the on-premises version. A timeline that confirms the strategic Cloud Computing turn taken by Microsoft and on display at these TechDays. Designed to improve the productivity of sales teams, the new versio...

    Read the article

  • OWB and heterogeneous data sources, Oracle Magazine, May-June 2010

    - by Fekete Zoltán
    The latest issue of Oracle Magazine is out (naturally, that is what a latest issue does): Oracle Magazine, May-June 2010. There are many interesting articles to choose from in this issue: cloud computing, Java, .NET, next-generation backup, parallelism and PL/SQL, OWB, ... I recommend the Business Intelligence - Oracle Warehouse Builder 11g Release 2 and Heterogeneous Databases article, which explains how to use heterogeneous data sources with the Oracle Warehouse Builder ETL-ELT tool, for example how to connect to SQL Server and extract data with high performance. See also Oracle's data integration web page. This rich heterogeneity comes to OWB from its sibling product, Oracle Data Integrator. The data integration statement of direction (SOD) says that these two Java-based products, OWB and ODI, will be merged into a single product.

    Read the article

  • Featured partner: Avnet To Supply Oracle Enterprise Cloud Management Solutions In Middle East & North Africa Region

    - by Javier Puerta
    "Global IT solutions distribution leader, Avnet Technology Solutions have been approved to distribute Oracle Enterprise Manager 12c, a complete, integrated and business-driven enterprise cloud management solution, in the Middle East & North Africa region. This will help Avnet which serve customers and suppliers in more than 70 countries to accelerate partners’ business growth in the region while providing support and enablement services to help them quickly address local opportunities. Oracle Enterprise Manager 12c creates business value from IT by leveraging the built-in management capabilities of the Oracle stack for traditional and cloud environments. Using this solution, customers have reported 12 times faster achievement of IT-business alignment. According to Senior Director Oracle business MENA, Avnet Technology Solutions, Hani Barakat, “Enterprises in the Middle East and North Africa region can increase their efficiency and responsiveness while reducing costs and complexity for traditional data centers, virtualised, and cloud computing environments with the help of Oracle Enterprise Manager 12c.” See full press release in "Ventures Africa"

    Read the article

  • Moving OVD from Test to Production

    - by mark.wilcox
    A customer asked support "How do we move a test OVD server to production?". There are a couple of ways to do this. One way is to clone the environment: http://download.oracle.com/docs/cd/E15523_01/core.1111/e10105/testprod.htm#CH... Another way, which is particularly useful if you want to push configuration from a parent OVD server to children in a cluster: http://download.oracle.com/docs/cd/E14571_01/oid.1111/e10046/basic_server_set... Note that if you use the second option and you have any data in a Local Store Adapter, you may also need to use the oidcmprec tool to synchronize that data: http://download.oracle.com/docs/cd/E14571_01/oid.1111/e10029/replic_mng_mon.h... Posted via email from Virtual Identity Dialogue

    Read the article

  • Ikoula: 500 dedicated virtual servers free of charge, available in two configurations and including MySQL and SQL Server

    Ikoula: 500 dedicated virtual servers free of charge. Available in two configurations and including MySQL and SQL Server. Ikoula, a Cloud Computing leader in France, is running a limited promotion on its dedicated virtual server offering. 500 Flex'Servers are being given away to the first subscribers to this promotion, valid until January 31, for a period of 3 months and with no term commitment. [IMG]http://idelways.developpez.com/news/images/flex_image.gif[/IMG] Built on a robust, high-performance, high-end architecture (Dell, Cisco, ...), iKoula's Flex'Servers can potentially adapt to any need. The number of pro...

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic runs solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice it does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation, but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. A few challenges arose during design and implementation:

    - Naive algorithms had an unreasonable upper bound of computational cost.
    - There was significant complexity associated with configurations where the member count varied significantly between physical machines.
    - Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).

    A custom PAS may need to solve several problems simultaneously, such as:

    - Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary partitions and the same number of backup partitions).
    - Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded).
    - Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, ensuring that each partition is collocated with its corresponding message queue).
    - Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions).
    - Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure).
    - Achieving the above goals while ensuring that partition movement is minimized.

    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.

    The most obvious general-purpose partition assignment algorithm (possibly the only general-purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function, which is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general-purpose algorithms don't exist, but the key takeaway is that algorithms will tend either to have exorbitant worst-case performance or to fail to find optimal solutions (or both); it is very important to be able to show that worst-case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts, supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
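    To make the cost argument concrete, below is a minimal sketch of the brute-force approach described above. Every name in it is illustrative and it is not the Coherence PAS API: it enumerates every mapping of N partitions onto M members (M^N candidate states, which explodes just as quickly as the N! permutation count cited above) and scores each candidate for balance.

      // Brute-force partition assignment: enumerate every mapping of N
      // partitions onto M members and keep the best-scoring one. This is
      // NOT the Coherence PAS API, just a sketch of why the naive approach
      // is combinatorially hopeless (M^N candidate assignments).
      public class BruteForceAssignment {

          // Score an assignment by load imbalance: the variance of the
          // partition count per member. Lower is better; 0 is perfectly even.
          static double score(int[] ownerOf, int memberCount) {
              int[] load = new int[memberCount];
              for (int owner : ownerOf) load[owner]++;
              double mean = (double) ownerOf.length / memberCount;
              double variance = 0;
              for (int l : load) variance += (l - mean) * (l - mean);
              return variance;
          }

          // Recursively try every owner for every partition: M^N evaluations.
          static double bestScore(int[] ownerOf, int partition, int memberCount) {
              if (partition == ownerOf.length) return score(ownerOf, memberCount);
              double best = Double.MAX_VALUE;
              for (int m = 0; m < memberCount; m++) {
                  ownerOf[partition] = m;
                  best = Math.min(best, bestScore(ownerOf, partition + 1, memberCount));
              }
              return best;
          }

          public static void main(String[] args) {
              int partitions = 8, members = 3;  // tiny on purpose: 3^8 = 6561 states
              System.out.println("best imbalance = "
                      + bestScore(new int[partitions], 0, members));
          }
      }

    With a realistic partition count (Coherence defaults to 257 partitions), this search is out of the question, which is why practical implementations fall back on greedy, incremental rebalancing heuristics.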

    Read the article

  • Where is a postgresql 9.1 database stored in ubuntu 12.04?

    - by celenius
    I installed and created a PostgreSQL database on Ubuntu. I then created the database using the following command: sudo su postgres createdb mydatabase However, I can't figure out where the database was initialized. I would like to be able to edit the pg_hba.conf and postgresql.conf files. When I view the database using pgAdmin I see the following information: CREATE DATABASE mydatabase WITH OWNER = postgres ENCODING = 'UTF8' TABLESPACE = pg_default LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8' CONNECTION LIMIT = -1; Any thoughts on how I can find the database cluster location?
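    One reliable way to locate the cluster is to ask the running server itself: PostgreSQL exposes its paths through SHOW data_directory, SHOW config_file and SHOW hba_file. Below is a minimal JDBC sketch; the connection URL and credentials are illustrative placeholders, and it assumes the PostgreSQL JDBC driver is on the classpath. On a stock Ubuntu package install the data directory is typically /var/lib/postgresql/9.1/main, with the config files under /etc/postgresql/9.1/main.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      // Ask the running PostgreSQL server where its cluster and config live.
      // Host, database and credentials below are illustrative placeholders.
      public class FindCluster {
          public static void main(String[] args) throws Exception {
              String url = "jdbc:postgresql://localhost:5432/mydatabase";
              try (Connection conn = DriverManager.getConnection(url, "postgres", "")) {
                  for (String setting : new String[] {
                          "data_directory", "config_file", "hba_file" }) {
                      try (Statement st = conn.createStatement();
                           ResultSet rs = st.executeQuery("SHOW " + setting)) {
                          rs.next();
                          System.out.println(setting + " = " + rs.getString(1));
                      }
                  }
              }
          }
      }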

    Read the article

  • How can I obtain in-game data from Warcraft 3 from an external process?

    - by Slav
    I am implementing a behavior algorithm and would like to test it with my lovely Warcraft III game to watch how it fights against real players. The problem I'm having is that I don't know how to obtain information about the in-game state (units, structures, environment, etc.) from the running WC3 game. My algorithm needs access to the hard drive and possibly distributed computing, which is why JASS (WC3's editor language) isn't appropriate; I need to run my algorithm from a separate process. Direct3D hooking is one approach, but it hasn't been done for WC3 yet, and a significant drawback of that approach would be the inability to watch how the AI performs online, since it uses the viewport to issue commands. How can I read in-game data from WC3 from a different process in the fastest and easiest way?

    Read the article

  • how to get started with a game engine [closed]

    - by user19343
    I'm a 3rd-year Computer Science student and I would like to get started with building a game engine, or at least tinkering with making one. I am curious whether there are any good resources to use to get started. I get the idea behind the different pieces in an engine, but I'm not really sure how they fit together. Is there anything out there to help teach me the skeleton of a game engine? So far I've been playing with the idea of a game engine that stores its modules in a circular linked list, so that each can do its computing and then pass control to the next piece of the engine (sketched below).
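    As an illustration of that last idea, here is a minimal sketch of such a "module ring" loop. All names are hypothetical, and a deque cycled head-to-tail stands in for the circular linked list; this is one possible wiring, not the way engines must be built.

      import java.util.ArrayDeque;
      import java.util.Deque;

      // One engine subsystem; each gets a turn per frame, then passes control on.
      interface Module {
          void update(double dt);
      }

      public class EngineLoop {
          public static void main(String[] args) throws InterruptedException {
              // A deque cycled head-to-tail behaves like a circular linked list.
              Deque<Module> ring = new ArrayDeque<>();
              ring.add(dt -> System.out.println("input   @" + dt));
              ring.add(dt -> System.out.println("physics @" + dt));
              ring.add(dt -> System.out.println("render  @" + dt));

              double dt = 1.0 / 60.0;  // fixed 60 Hz timestep
              for (int frame = 0; frame < 3; frame++) {
                  for (int i = 0, n = ring.size(); i < n; i++) {
                      Module m = ring.poll();  // take the head...
                      m.update(dt);
                      ring.add(m);             // ...and put it back at the tail
                  }
                  Thread.sleep(16);
              }
          }
      }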

    Read the article

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster-size threshold for this, rather a gradually increasing tendency for issues to arise. The most obvious challenges are that a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters), and that adding cache servers will not increase query throughput if the query processing is not compute-bound (which would generally be the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query at a single server (using partition affinity to ensure that all of the relevant data is in a single partition), as sketched below. If this cannot be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).
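    A minimal sketch of that approach follows. The cache name, affinity key and extractor are illustrative assumptions, while PartitionedFilter, PartitionSet and KeyPartitioningStrategy are the Coherence (3.5 and later) classes involved.

      import java.util.Set;

      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.NamedCache;
      import com.tangosol.net.PartitionedService;
      import com.tangosol.net.partition.PartitionSet;
      import com.tangosol.util.Filter;
      import com.tangosol.util.filter.EqualsFilter;
      import com.tangosol.util.filter.PartitionedFilter;

      public class TargetedQuery {
          public static void main(String[] args) {
              // Cache and key names are illustrative.
              NamedCache orders = CacheFactory.getCache("orders");
              PartitionedService service =
                      (PartitionedService) orders.getCacheService();

              // With key affinity, every entry for this customer lives in the
              // partition its affinity key hashes to; compute that partition.
              int partId = service.getKeyPartitioningStrategy()
                                  .getKeyPartition("customer-42");

              PartitionSet target = new PartitionSet(service.getPartitionCount());
              target.add(partId);

              // The wrapped query now runs only on the member owning that partition.
              Filter query = new EqualsFilter("getStatus", "OPEN");
              Set results = orders.entrySet(new PartitionedFilter(query, target));
              System.out.println(results.size() + " open orders");
          }
      }

    Cycling a PartitionSet through the partitions one (or a few) at a time with the same wrapped filter is also a common way to page through a large result set without hitting every member at once.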

    Read the article

  • The general public understands nothing about the "Cloud", at least in the United States; is it different elsewhere?

    The general public understands nothing about the "Cloud", at least in the United States. Is it different elsewhere? Cloud Computing: the term appears in almost every news item related to the world of new technologies. According to a new survey, for 29% of Americans the term Cloud conjures up nothing more than a white mass floating in the sky! Is it different elsewhere? [IMG]http://idelways.developpez.com/news/images/cloud-wtf.jpg[/IMG] According to a recent national poll conducted by the research firm "Wakefield Research" and commissioned by Citrix, most Americans seem confused about the Cloud. What does it really mean? And...

    Read the article

  • Specialization without borders

    - by A&C Redaktion
    Arrow achieves Exadata Specialization for all EMEA countries. "Know-how sells"; our VAD Arrow knows that too. The IT distributor from Fürstenfeldbruck, near Munich, has focused on delivering enterprise and midrange computing solutions, including Oracle's Exadata technology. Exadata combines servers, storage, networking and database software in one system, and so helps manage even very large data volumes, the "Big Data", with ease. The combination of hardware and software offers Oracle partners enormous business potential in sales and services, which is why this expertise is so important. Thanks to its four European demo centers and a total of eight fully installed Exadata machines, Arrow has been able to gather ample experience with the Oracle Exa family. The VAD offers Oracle partners and customers performance tests, test environments and proofs of concept (PoC), across national borders. As a logical consequence, Arrow was awarded Oracle's EMEA Specialization for Exadata in August 2012! Our warmest congratulations, and much success with the Exa stack!

    Read the article

  • If I intend to use Hadoop, is there a difference between 12.04 LTS 64-bit Desktop and Server?

    - by Charles Daringer
    Sorry for such a newbie question, but I'm looking at installing the M3 edition of MapR; the requirements are at this link: http://www.mapr.com/doc/display/MapR/Requirements+for+Installation And my question is this: is the 64-bit Desktop kernel for 12.04 LTS adequate, or the "same" as the Server version of the product? If I'm setting up a lab to attempt to install a home cluster environment, should I start with the Server version, or dual-boot that distribution? My assumption is that the two are the same, and that I can add any additional software to the 64-bit install as needed. Can anyone elaborate on this? Have I missed something obvious?

    Read the article

  • Is OpenStack suitable as a fault tolerant DB host?

    - by Jit B
    I am trying to design a fault-tolerant DB cluster (the schema does not matter) that would not require much maintenance. After looking at almost everything from MySQL to MongoDB to HBase, I still find that no DB is easily scalable; Cassandra comes close but it has its own set of problems. So I was thinking: what if I run something like MySQL or OrientDB on top of a large OpenStack VM? The VM would be fault tolerant by itself, so I don't need to do it at the DB level. Is it viable? Has it been done before? If not, then what are the possible problems with this approach?

    Read the article

  • Do Office 2010's record sales show that 100% Cloud is not yet mature? The Microsoft vice-president savors his results

    Do Office 2010's record sales show that 100% Cloud is not yet mature? The Microsoft vice-president in charge of the product savors his results. Microsoft's Office 2010 productivity suite is already one year old. The essentially desktop-based tool was released at a time when the attention and investments of several vendors (Microsoft included) were directed toward Cloud Computing platforms and infrastructures. The advent of Cloud solutions looked like a threat to Microsoft's flagship product, which weathered waves of criticism when the suite was released. ...

    Read the article

  • Box2Dweb very slow on node.js

    - by Peteris
    I'm using Box2Dweb on node.js. I have a rotated box object that I apply an impulse to move around. The timestep is set at 50ms, however, it bumps up to 100ms and even 200ms as soon as I add any more edges or boxes. Here are the edges I would like to use as bounds around the playing area:

      // Computing the corners
      var upLeft = new b2Vec2(0, 0),
          lowLeft = new b2Vec2(0, height),
          lowRight = new b2Vec2(width, height),
          upRight = new b2Vec2(width, 0)

      // Edges bounding the visible game area
      var edgeFixDef = new b2FixtureDef
      edgeFixDef.friction = 0.5
      edgeFixDef.restitution = 0.2
      edgeFixDef.shape = new b2PolygonShape
      var edgeBodyDef = new b2BodyDef;
      edgeBodyDef.type = b2Body.b2_staticBody
      edgeFixDef.shape.SetAsEdge(upLeft, lowLeft)
      world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
      edgeFixDef.shape.SetAsEdge(lowLeft, lowRight)
      world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
      edgeFixDef.shape.SetAsEdge(lowRight, upRight)
      world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
      edgeFixDef.shape.SetAsEdge(upRight, upLeft)
      world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)

    Can box2d really become this slow for even two bodies or is there some pitfall? It would be very surprising given all the demos which successfully use tens of objects.

    Read the article

  • Is the tap-to-click issue solved

    - by AWE
    I'm just an average Joe when it comes to computing (maybe less than the average Joe) but I hate tap-to-click. In System Settings there is no touchpad tab. Is it true that this has been fixed? I'm using a Dell Inspiron N5110. xinput list shows:

      Virtual core pointer                  id=2   [master pointer (3)]
        ↳ Virtual core XTEST pointer        id=4   [slave pointer (2)]
        ↳ PS/2 Generic Mouse                id=13  [slave pointer (2)]

    This is really strange, because Dell is one of the top laptop manufacturers and Ubuntu one of the top Linux distros, and Canonical claims that they are working closely with Dell.

    Read the article

  • The open source R language improved by its developers: more stable, flexible and commercial

    The open source R language improved by its developers: more stable, flexible and commercial. The R programming language has been used for more than ten years by statisticians to perform data analysis, predictive modeling and visualization. This week it is getting revolutionary changes, with a commercial overhaul intended to promote its adoption. Revolution Computing, which has sold R on the market for two years, has renamed itself Revolution Analytics and unveiled a new roadmap for its R tools, hoping this will expand the market. With its new Revolution R Enterprise product line, new features as well as new tools...

    Read the article
