Search Results

Search found 356 results on 15 pages for 'datacenter'.


  • Remote Desktop Services Licensing - Does server have to have a RDS role?

    - by transistor1
    I recently set up a "micro" size Windows 2008 Datacenter server on Amazon AWS. My small group needs several concurrent RDS users to be able to access the machine. Without installing the "Remote Desktop Server" role, it allows 2 concurrent connections. I read on MS' website that in order to set up multiple users, we needed to install the RDS role. I did so, but now the application we are trying to share is running much slower than it was before. Prior to the role installation, it took about 5 seconds to open; now it takes a few minutes to open -- without any other users logged on except me. My assumption is that the RDS role may be too much for this micro instance to handle, and currently, changing to another size instance is not an option (it may become possible later if we receive enough funding). This leads me to the following questions: 1) Is it sensible to assume that the RDS role is what is slowing things down, or are there other things I could look at to speed it up? We are talking about a machine with ~600MB of memory. 2) If I revert to the pre-RDS setup, is there any legitimate way (in terms of purchasing RDS licenses) to get more than 2 concurrent desktops? I did read this, and am not questioning that the answerer is knowledgeable; but someone else may have some other experience. I am also making it clear that we want to do this in a legitimate way. Thanks in advance for any assistance that can be provided! EDIT: if it is helpful in answering the question, the application in question is a Lotus Approach database. Also, I am asking this from a technical perspective, not a legal one: I want to know if it is possible to install valid licenses without the RDS role.

    Read the article

  • HSphere - Only sees Apache 2 Test Page after forced shutdown?

    - by Darkwoof
    Hi, I have a dedicated server running on a Dell PowerEdge 850 with CentOS 4.4 and HSphere 3.0 Patch 6, colocated at a datacenter. Last night my hosting company had to schedule a change in the power bar, and I gave them the go-ahead to shut down the server and bring it up when they were done. Since they do not have admin access to the machine, I suppose they did a forced shutdown. When the machine was brought up, I found that all my domains (and sub-domains) are now pointing to an "Apache 2 Test Page" instead of the pre-configured sites that were running prior to the shutdown. This apparently only affects the standard sites running on port 80 - my Webmin instance running at port 1000 is still accessible, for example, as is my HSphere control panel running at port 8080. I've checked the config settings using the HSphere UI for each of the sites and didn't find anything wrong. I've also tried rebooting the server via SSH, which does not rectify the problem. I've previously done reboots with no issues; the sites would just come right back up when it was done, but not this time. I'm guessing some configuration file got corrupted or overwritten this time? Anyone with experience with HSphere who can provide some advice on what happened and how to solve it? Thanks. (I do not have an active support agreement for HSphere since Parallels took over and increased the minimum license count to 200. I only had 25 licenses for use by family and friends.)

    Read the article

  • Hyper-V snapshots – unable to start VM

    - by ahmedz
    I restarted my host server after shutting down three guest VMs. After the restart I tried to start the VMs and got an error stating that the VM failed to start. SERVERNAME failed to start. Attachment 'avhd file path' is read only. Please provide read/write access to the attachment. Error: 'General access denied error' SERVERNAME failed to start. (virtual machine ID 17292200-wd22-dd22-d23-dddddd2222) The issue seems to be with disk space. The VHD file for this VM is 128 GB and there are two AVHD files of 58 and 75 GB, whereas the total disk space on this drive (E) is 280 GB and the free space is only around 23 GB. I understand that the error is caused by the unavailability of the required disk space. Unfortunately, I cannot increase the disk space on this drive. However, I have another drive (D) that has 400 GB of free space. I exported this VM to the D drive and then tried to add the copied AVHD files, but it gives me a similar error. I am running Windows Server 2008 R2 Datacenter. Any help is appreciated.

    Read the article

  • Growing a small hosting company [closed]

    - by user2353007
    We currently have a few servers: 1 WHM VPS (2GB), 1 MS SQL VPS (2GB), and 1 IIS VPS (2GB). The VPS servers are doing fine as far as uptime and response times, but we would like to add the following features: 1) monitoring with load statistics 2) failover. For the monitoring solution, I have looked at Zabbix, Zenoss, Nagios, and a couple of cloud offerings like monitor.us and watchdog from Zerigo. Our current hosting company suggested we get a dedicated server or VPS and install load balancing software (not sure I like that idea). I've looked into the Rackspace and Amazon load balancers, which seem like the most feasible load-balancing options. Does anybody have any input on the monitoring and load balancing products I'm looking into? Monitoring should track uptime as well as give reports on memory usage, disk usage, processor usage, and which processes/websites/users are responsible for the load. It would be ideal if the load balancer worked with any IP; I am not sure whether either the Rackspace or Amazon load balancers allow load balancing with servers outside their datacenter. Thank you.

    Read the article

  • Sun Fire X4270 M3 SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Two-Tier Standard Sales and Distribution (SD) Benchmark

    - by Brian
    Oracle's Sun Fire X4270 M3 server achieved 8,320 SAP SD Benchmark users running SAP enhancement package 4 for SAP ERP 6.0 with unicode software using Oracle Database 11g and Oracle Solaris 10. The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat both the IBM Flex System x240 and the IBM System x3650 M4 servers running DB2 9.7 and Windows Server 2008 R2 Enterprise Edition. The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the HP ProLiant BL460c Gen8 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 6%. The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat the Cisco UCS C240 M3 server running SQL Server 2008 and Windows Server 2008 R2 Datacenter Edition by 9%. The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the Fujitsu PRIMERGY RX300 S7 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 10%.

    Performance Landscape
    SAP-SD 2-Tier Performance Table (in decreasing performance order). SAP ERP 6.0 Enhancement Pack 4 (Unicode) results (benchmark version from January 2009 to April 2012). All systems: 2 x Intel Xeon E5-2690 @ 2.90 GHz, 128 GB memory.
    System | OS | Database | Users | SAP ERP/ECC Release | SAPS | SAPS/Proc | Date
    Sun Fire X4270 M3 | Oracle Solaris 10 | Oracle Database 11g | 8,320 | 2009 6.0 EP4 (Unicode) | 45,570 | 22,785 | 10-Apr-12
    IBM Flex System x240 | Windows Server 2008 R2 EE | DB2 9.7 | 7,960 | 2009 6.0 EP4 (Unicode) | 43,520 | 21,760 | 11-Apr-12
    HP ProLiant BL460c Gen8 | Windows Server 2008 R2 EE | SQL Server 2008 | 7,865 | 2009 6.0 EP4 (Unicode) | 42,920 | 21,460 | 29-Mar-12
    IBM System x3650 M4 | Windows Server 2008 R2 EE | DB2 9.7 | 7,855 | 2009 6.0 EP4 (Unicode) | 42,880 | 21,440 | 06-Mar-12
    Cisco UCS C240 M3 | Windows Server 2008 R2 DE | SQL Server 2008 | 7,635 | 2009 6.0 EP4 (Unicode) | 41,800 | 20,900 | 06-Mar-12
    Fujitsu PRIMERGY RX300 S7 | Windows Server 2008 R2 EE | SQL Server 2008 | 7,570 | 2009 6.0 EP4 (Unicode) | 41,320 | 20,660 | 06-Mar-12
    Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

    Configuration and Results Summary
    Hardware Configuration: Sun Fire X4270 M3, 2 x 2.90 GHz Intel Xeon E5-2690 processors, 128 GB memory, Sun StorageTek 6540 with 4 x 16 x 300 GB 15K rpm 4Gb FC-AL.
    Software Configuration: Oracle Solaris 10, Oracle Database 11g, SAP enhancement package 4 for SAP ERP 6.0 (Unicode).
    Certified Results (published by SAP): Number of benchmark users: 8,320. Average dialog response time: 0.95 seconds. Throughput: fully processed order line items: 911,330; dialog steps/hour: 2,734,000; SAPS: 45,570. SAP Certification: 2012014.

    Benchmark Description
    The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments. SAP is one of the premier worldwide ERP application providers and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.
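    As a quick consistency check on the certified figures above (using SAP's standard definition that 100 SAPS corresponds to 2,000 fully processed order line items per hour, or 6,000 dialog steps per hour): 911,330 order line items/hour / 20 ≈ 45,567, and 2,734,000 dialog steps/hour / 60 ≈ 45,567, both of which agree with the certified 45,570 SAPS.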
    See Also
    SAP Benchmark Website
    Sun Fire X4270 M3 Server oracle.com OTN
    Oracle Solaris oracle.com OTN
    Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN

    Disclosure Statement
    Two-tier SAP Sales and Distribution (SD) standard SAP SD benchmark based on SAP enhancement package 4 for SAP ERP 6.0 (Unicode) application benchmark as of 04/11/12:
    Sun Fire X4270 M3 (2 processors, 16 cores, 32 threads) 8,320 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, Oracle 11g, Solaris 10, Cert# 2012014.
    IBM Flex System x240 (2 processors, 16 cores, 32 threads) 7,960 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012016.
    IBM System x3650 M4 (2 processors, 16 cores, 32 threads) 7,855 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012010.
    Cisco UCS C240 M3 (2 processors, 16 cores, 32 threads) 7,635 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 DE, Cert# 2012011.
    Fujitsu PRIMERGY RX300 S7 (2 processors, 16 cores, 32 threads) 7,570 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012008.
    HP ProLiant DL380p Gen8 (2 processors, 16 cores, 32 threads) 7,865 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012012.
    SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs.
    A word from Omnilogic: “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL
    Challenges: Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine. Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment. Demonstrate the power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes. Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating the risk of unplanned outages. Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners.
    Solutions: Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration. Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, and support seminars and to showcase live demonstrations for Oracle Partners. Proved how Oracle Exadata’s pre-engineered systems, which come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve their full performance potential immediately after go-live. Increased processing speeds 10-fold, with zero data loss, for a telecommunications provider’s client-facing customer relationship management solution. Achieved performance improvements of between 6 and 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms. Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata. Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience. Demonstrated that Oracle Exadata’s built-in analytics, data mining, and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations. Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allow 10 times more data to be stored than on competitor platforms. Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers. Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption. Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion. On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs.

    Read the article

  • Reminder: Benefits of Virtualization for ISVs - 14/Dec/10, Porto

    - by Paulo Folgado
    This training session addresses the main difficulties that Independent Software Vendors (ISVs) face when they have to choose the platforms on which they will certify, install, and support their applications, and how Oracle VM (and Oracle Enterprise Linux) can help them overcome those difficulties. The classic ISV business model - develop an application solution to solve a given business problem, analyze the market to determine which operating systems and hardware the customers in the target market use, and decide to support the hardware and software platforms used by 80% of those target customers (treating other configurations requested by a few important customers as exceptions) - worked well in the 1980s and early 1990s, when there was less platform diversity. However, with the appearance in recent years of multiple operating system versions and Linux "flavors," this model has started to become a nightmare. Each customer has its platform of choice and expects ISVs to support those choices, which is a drain on ISV resources and costs. Oracle's virtualization technologies, by making it possible to "simulate" a given hardware configuration - so that the operating system "thinks" it is running on a predefined, standardized hardware configuration on which the applications run - are an excellent vehicle for ISVs looking for a simple, easy-to-install, and easy-to-support way to deploy their applications, allowing large cost savings in developing, testing, and supporting those applications. Who should attend? This training is aimed above all at those who make decisions about the technology platforms the ISV has to support, as well as those who deal with the cost structure of its operations and have a view of the costs associated with developing, certifying, installing, and supporting multiple platforms. If you want to know more about Oracle VM and how it can help drastically reduce your costs, don't miss this session.
    AGENDA: 09:00 Welcome & Introduction | ISV Partner View... Why Use Virtualization? | The ISV Deployment Dilemma: The Problem of Supporting Multiple Platforms | How Can Virtualization Help? | The Use of Templates: What is a Template? How are Templates Created? Customer's Point of View | Assembly Builder | WebLogic Virtual Edition | Managing Oracle VM: Best Practices for Virtualizing Oracle Database 11g, Managing Virtual Environments | Coffee Break | Oracle Complete and Integrated Virtualization Portfolio: From Datacenter to Desktop, The Next Generation Virtualization, Private Cloud with Middleware Virtualization | Benefits of Using Oracle VM (and Oracle Enterprise Linux): Support Advantages, Production Ready Virtual Machines, Licensing Terms, Partner Resources and OPN Benefits | 12:45 Q&A and Wrap-up
    Date: December 14 - 09:00 / 13:00. Location: Oracle Portugal, Av. da Boavista, 1837 - Edifício Burgo - Escritório 13.4, 4100-133 PORTO. Audience: Development, Technology, and Services managers of Oracle ISV partners. Training delivered by Altimate.

    Read the article

  • Autoscaling in a modern world… last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important.  Our code will work right the first time.  So, you can understand my surprise when the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale.  I mean, it worked on my machine.  How dare the datacenter argue with that.  So, how did I track down the problem?  (Turns out, it was not so much code as the lack of the right certificate.)  When I ran it locally in the developer fabric, I was able to see a wealth of information: lots of periodic status info every time the autoscaler came around to check on my rules and decide whether to act.  But that information was not making it to Azure storage.  The diagnostics were not being transferred to where I could easily see and use them to track down why things were not being cooperative.  After a bit of digging, I discovered the problem.  You need to add a bit of extra configuration to get the correct information stored for you.  I added the following to my app.config:
    Code Snippet
    <system.diagnostics>
      <sources>
        <source name="Autoscaling General" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
          <listeners>
            <add name="AzureDiag" />
            <remove name="Default" />
          </listeners>
        </source>
        <source name="Autoscaling Updates" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
          <listeners>
            <add name="AzureDiag" />
            <remove name="Default" />
          </listeners>
        </source>
      </sources>
      <switches>
        <add name="SourceSwitch" value="Verbose, Information, Warning, Error, Critical" />
      </switches>
      <sharedListeners>
        <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiag" />
      </sharedListeners>
      <trace>
        <listeners>
          <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
            <filter type="" />
          </add>
        </listeners>
      </trace>
    </system.diagnostics>
    Suddenly all the rich tracing info I needed was filling up my storage account.  After a few cycles of attempting to scale, I identified the cert problem, uploaded a correct certificate, and away it went.  I hope this was helpful.

    Read the article

  • Unleash the Power of Cryptography on SPARC T4

    - by B.Koch
    by Rob Ludeman Oracle’s SPARC T4 systems are architected to deliver enhanced value for customers via the inclusion of many integrated features.  One of the best examples of this approach is the on-chip cryptographic support that delivers wire-speed encryption capabilities without any impact to application performance.
    The Evolution of SPARC Encryption: SPARC T-Series systems have a long history of providing this capability, dating back to the release of the first T2000 systems, which featured support for on-chip RSA encryption directly in the UltraSPARC T1 processor.  Successive generations have built on this approach by adding support for additional encryption ciphers that are tightly coupled with the Oracle Solaris 10 and Solaris 11 encryption framework.  While earlier versions of this technology were implemented using co-processors, the SPARC T4 was redesigned with new crypto instructions to eliminate some of the performance overhead associated with the former approach, resulting in much higher performance for encrypted workloads.
    The Superiority of the SPARC T4 Approach to Crypto: As companies continue to engage in more and more e-commerce, the need to provide greater degrees of security for these transactions is more critical than ever before.  Traditional methods of securing data in transit have a number of drawbacks that are addressed by the SPARC T4 cryptographic approach.
    1. Performance degradation – cryptography is highly compute-intensive, so there is a significant cost when using other architectures without embedded crypto functionality.  This performance penalty impacts the entire system, slowing down the performance of web servers (SSL), for example, and potentially bogging down the speed of other business applications.  The SPARC T4 processor enables customers to deliver high levels of security to internal and external customers while not impacting overall SLAs in their IT environment.
    2. Added cost – one way to avoid performance degradation is the addition of add-in cryptographic accelerator cards or external offload engines in other systems.  While these solutions provide a brute-force mechanism to avoid the problem of slower system performance, they usually come at an added cost.  Customers looking to encrypt datacenter traffic without the overhead and expenditure of extra hardware can rely on SPARC T4 systems to deliver the performance necessary without the need to purchase other hardware or add-on cards.
    3. Higher complexity – the addition of cryptographic cards, or leveraging load balancers to perform encryption tasks, results in added complexity from a management standpoint.  With SPARC T4, encryption keys and the framework built into Solaris 10 and 11 mean that administrators generally don’t need to spend extra cycles determining how to perform cryptographic functions.  In fact, many of the instructions are built in and require no user intervention to be utilized.  For example, for OpenSSL on Solaris 11, SPARC T4 crypto is available directly with a new built-in OpenSSL 1.0 engine, called the "t4 engine."  For a deeper technical dive into the new instructions included in SPARC T4, consult Dan Anderson’s blog.
    Conclusion: In summary, SPARC T4 systems offer customers much more value for applications than just increased performance.
    The integration of key virtualization technologies, embedded encryption, and a true enterprise operating system, Oracle Solaris, provides direct business benefits that supersede the commodity approach to data center computing.   SPARC T4 removes the roadblocks to secure computing by offering integrated crypto accelerators that can save IT organizations operating costs while delivering higher levels of performance and meeting objectives around compliance. For more on the SPARC T4 family of products, go here.
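    The point above is that the acceleration is transparent to applications that already use the platform's standard crypto stack. As a rough illustration (added here, not taken from the article), an ordinary JCE call like the one below is all the application-level code involved; whether it ends up on the T4 crypto instructions depends on the Solaris/JDK provider configuration underneath, not on the application code itself:
    Code Snippet (illustrative)
        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import java.nio.charset.StandardCharsets;

        public class AesSample {
            public static void main(String[] args) throws Exception {
                // Generate a 128-bit AES key through the standard JCE API.
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(128);
                SecretKey key = kg.generateKey();

                // Encrypt a sample payload. On Solaris 11 / SPARC T4, the provider
                // underneath this call can use the on-chip AES instructions; the
                // application code does not change either way.
                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, key);
                byte[] ciphertext = cipher.doFinal("datacenter traffic".getBytes(StandardCharsets.UTF_8));
                System.out.println("Encrypted " + ciphertext.length + " bytes");
            }
        }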

    Read the article

  • Oracle Social Network and the Flying Monkey Smart Target

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. I teased this before OpenWorld, and for those of you who didn’t make it to the show or didn’t come by the Office Hours to take the Oracle Social Network Technical Tour Noel (@noelportugal) ran, I give you the Flying Monkey Smart Target. In brief, Noel built a target, about two feet tall, which when struck, played monkey sounds and posted a comment to an Oracle Social Network Conversation, all controlled by a Raspberry Pi. He also connected a Dropcam to record the winner just prior to the strike. I’m not sure how it all works, but maybe Noel can post the technical specifics. Here’s Noel describing the Challenge, the Target and a few other tidbits in an interview with Friend of the ‘Lab, Bob Rhubart (@brhubart). The monkey target bits are 2:12-2:54 if you’re into brevity, but watch the whole thing. Here are some screen grabs from the Oracle Social Network Conversation, including the Conversation itself, where you can see all the strikes documented, the picture captured, and the annotation capabilities. That’s Diego in one shot, looking very focused, and Ernst in the other, who kindly annotated himself, two of the development team members. You might have seen them in the Oracle Social Network Hands-On Lab during the show. There’s a trend here. Not by accident, fun stuff like this has become our calling card, e.g. the Kscope 12 WebCenter Rock ‘em Sock ‘em Robots. Not only are these entertaining demonstrations, but they showcase what’s possible with RESTful APIs and get developers noodling on how easy it is to connect real objects to cloud services to fix pain points. I spoke to some great folks from the City of Atlanta about extending the concepts of the flying monkey target to physical asset monitoring. Just take an internet-connected camera with REST APIs like the Dropcam, wire it up to Oracle Social Network, and you can hack together a monitoring device for a datacenter or a warehouse. Sure, it’s easier said than done, but we’re a lot closer to that reality than we were even two years ago. Another noteworthy bit from Noel’s interview, beginning at 2:55, is the evolution of the social developer. Speaking of, make sure to check out the Oracle Social Developer Community. Look for more on the social developer in the coming months. Noel has become quite the Raspberry Pi evangelist, and why not, it’s a great tool, a low-power Linux machine, cheap ($35!) and highly extensible, perfect for makers and students alike. He attended a meetup on Saturday before OpenWorld, and during the show, I heard him evangelizing the Pi and its capabilities to many people. There is some fantastic innovation forming in that ecosystem, much of it with Java. The OTN gang raffled off five Pis, and I expect to see lots of great stuff in the very near future. Stay tuned this week for posts on all our Challenge entrants. There’s some great innovation you won’t want to miss. Find the comments. Update: I forgot to mention that Noel used Twilio, one of his favorite services, during the show to send out Challenge updates and information to all the contestants.
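    The REST point above is what makes hacks like this approachable. Purely as an illustration of the pattern (the endpoint, payload, and omitted authentication here are hypothetical, not the actual Oracle Social Network API), posting a strike notification from a small Java program could look something like this:
    Code Snippet (illustrative)
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class TargetStrikePoster {
            public static void main(String[] args) throws Exception {
                // Hypothetical Conversation endpoint; authentication is omitted for brevity.
                URL url = new URL("https://osn.example.com/conversations/123/messages");
                String json = "{\"message\": \"Target struck at " + System.currentTimeMillis() + "\"}";

                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(json.getBytes(StandardCharsets.UTF_8));
                }
                System.out.println("HTTP status: " + conn.getResponseCode());
            }
        }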

    Read the article

  • Microsoft hosting free Hyper-V training for VMware Pros

    - by Ryan Roussel
    Microsoft will be hosting free training for virtualization professionals focused on Hyper-V, System Center, and virtualization architecture.  Details are below.
    Just one week after Microsoft Management Summit 2011 (MMS), Microsoft Learning will be hosting an exclusive three-day Jump Start class specially tailored for VMware and Microsoft virtualization technology pros.  Registration for “Microsoft Virtualization for VMware Professionals” is open now; the class will be delivered online on March 29-31, 2011 from 10:00am-4:00pm PDT.  The course is COMPLETELY FREE and OPEN TO ANYONE!  Please share with your customers, blog, tweet, etc. – help us get the word out to strengthen support for Microsoft’s virtualization offerings.
    What’s the high-level overview? This cutting-edge course will feature expert instruction and real-world demonstrations of Hyper-V and brand new releases from System Center Virtual Machine Manager 2012 Beta (many of which will be announced just one week earlier at MMS).  Register Now!
    Day 1 will focus on “Platform” (Hyper-V, virtualization architecture, high availability & clustering): 10:00am – 10:30am PDT: Virtualization 360 Overview; 10:30am – 12:00pm: Microsoft Hyper-V Deployment Options & Architecture; 1:00pm – 2:00pm: Differentiating Microsoft and VMware (terminology, etc.); 2:00pm – 4:00pm: High Availability & Clustering.
    Day 2 will focus on “Management” (System Center Suite, SCVMM 2012 Beta, Opalis, Private Cloud solutions): 10:00am – 11:00am PDT: System Center Suite Overview w/ focus on DPM; 11:00am – 12:00pm: Virtual Machine Manager 2012 | Part 1; 1:00pm – 1:30pm: Virtual Machine Manager 2012 | Part 2; 1:30pm – 2:30pm: Automation with System Center Opalis & PowerShell; 2:30pm – 4:00pm: Private Cloud Solutions, Architecture & VMM SSP 2.0.
    Day 3 will focus on “VDI” (VDI infrastructure/architecture, v-Alliance, application delivery via VDI): 10:00am – 11:00am PDT: Virtual Desktop Infrastructure (VDI) Architecture | Part 1; 11:00am – 12:00pm: Virtual Desktop Infrastructure (VDI) Architecture | Part 2; 1:00pm – 2:30pm: v-Alliance Solution Overview; 2:30pm – 4:00pm: Application Delivery for VDI.
    Every section will be team-taught by two of the most respected authorities on virtualization technologies: Microsoft Technical Evangelist Symon Perriman and leading Hyper-V, VMware, and XEN infrastructure consultant Corey Hynes.
    Who is the target audience for this training? Suggested prerequisite skills include real-world experience with Windows Server 2008 R2, virtualization, and datacenter management. The course is tailored to these types of roles: IT Professional; IT Decision Maker; Network Administrators & Architects; Storage/Infrastructure Administrators & Architects.
    How do I register and learn more about this great training opportunity? Register: visit the Registration Page and sign up for all three sessions. Blog: learn more from the Microsoft Learning Blog. Twitter: here are a few posts you can retweet: Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V @MSLearning #Hyper-V | @SysCtrOpalis Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V #Hyper-V | Learn all the cool new features in Hyper-V & System Center 2012! SCVMM, Self-Service Portal 2.0, http://bit.ly/JS-Hyper-V #Hyper-V #Opalis
    What is a “Jump Start” course? A “Jump Start” course is “team-taught” by two expert instructors in an engaging radio talk show style format.
The idea is to deliver readiness training on strategic and emerging technologies that drive awareness at scale before Microsoft Learning develops mainstream Microsoft Official Courses (MOC) that map to certifications.  All sessions are professionally recorded and distributed through MS Showcase, Channel 9, Zune Marketplace and iTunes for broader reach.

    Read the article

  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new Federal Tax plans is new Licensing plans from Microsoft.  In both cases, everyone calculates several numbers.  First, will I pay more or less under this plan?  Second, will my competition pay more or less than now?  Third, will <insert interesting person/company here> pay more or less?  Not that items 2 and 3 are meaningful, that is just how people think. Much like tax plans, the devil is in the details, so let's see how this looks.  Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx First up is a switch from per-socket to per-core licensing.  Anyone who didn’t see something like this coming should rapidly search for a new line of work because you are not paying attention.  The explosion of multi-core processors has made SQL Server a bargain.  Microsoft is in business to make money, and the old per-socket model was not going to do that going forward. Per-core licensing also simplifies virtualization licensing.  Physical Core = Virtual Core, at least for licensing.  Oversubscribe your processors, that’s your lookout.  You still pay for what is exposed to the VM.  The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow.  The catch is you have to have Software Assurance to make the licenses mobile.  Nice touch there. Let’s have a moment of silence for the late, unlamented, largely ignored Workgroup Edition.  To quote the Microsoft FAQ: “Standard becomes our sole edition for basic database needs”.  Considering I haven’t encountered a single instance of SQL Server Workgroup Edition in the wild, I don’t think this will be all that controversial. As for pricing, it looks like a wash with current per-socket pricing based on four-core sockets.  Interestingly, that is the minimum core count Microsoft proposes to swap when transitioning per-socket to per-core if you are on Software Assurance.  Reading the fine print shows that if you are using more, you will get more core licenses. From the licensing FAQ: 15. How do I migrate from processor licenses to core licenses?  What is the migration path? Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter). Looks like the folks who invested in the AMD 12-core chips will make out like bandits. Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU licensed for server+CAL configurations only.  Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want to use the full BI feature set but without needing Enterprise Edition scale (or costs).  No, you don’t get ColumnStore, Compression, or Partitioning in the BI Edition.  Those are Enterprise scale features, ThankYouVeryMuch.  Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8 core server). The only part of the message I am missing is whether the current Failover Licensing Policy will change.  Do we need to fully or partially license failover servers?  That is a detail I definitely want to know.
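    To put the quoted migration rule into numbers (a reader's illustration, not from the FAQ itself): a two-socket server with quad-core processors trades 2 processor licenses for 2 x 4 = 8 core licenses, exactly the stated minimum, while a two-socket server with 12-core AMD processors trades the same 2 processor licenses for 2 x 12 = 24 core licenses at no additional cost, which is why the 12-core crowd comes out so far ahead.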

    Read the article

  • #OOW 2012: Big Data and The Social Revolution

    - by Eric Bezille
    As Cognizant CSO Malcolm Frank said about the "Future of Work," and how the Business should prepare for the new generation not only of devices and the "internet of things" but also of their users ("The Millennials"), moving from "consumers" to "prosumers": we are at a turning point today which is bringing us to the next IT Architecture Wave. So this is no longer just about putting Big Data, Social Networks and Customer Experience (CxM) on top of old existing processes; it is about embracing the next curve, by identifying which processes need to be improved, but also, and more importantly, which processes are obsolete and need to be retired, and which new processes need to be put in place. It is about managing both the hierarchical and structured Enterprise and its social connections and influencers inside and outside of the Enterprise. And this applies everywhere, up to Utilities and Smart Grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time with those new technologies, but about understanding what needs to be looked at, in real time, down to a handful of relevant reports with the KPIs relevant to the business. It is about how IT can anticipate the next wave, answer Business questions, and put those capabilities in real time right into the hands of the decision makers... This is the turning curve, where IT is really moving from the past decade's "Cost Center" to "Value for the Business," as Corporate Stakeholders will be able to touch the value directly at the tips of their fingers. It is all about making Data Driven Strategic decisions, encompassed and enriched by ALL the Data, and connected to customer/prosumer influencers. This brings stakeholders the ability to make informed decisions on questions like: “What would be the best Olympic Gold winner to represent my automotive brand?”... in a few clicks and in real time, based on social media analysis (Twitter, Facebook, Google+...) and connections linked to my Enterprise data. A true example was demonstrated by Larry Ellison in real time during yesterday’s keynote, where “Hardware and Software Engineered to Work Together” is not only about extreme performance but also about solutions that the Business can touch thanks to well-integrated Customer eXperience Management and Social Networking: bringing IT the capabilities to move to the next IT Architecture wave. This was also illustrated today in two other sessions that I had the opportunity to attend. The first session brought the “Internet of Things” in Oil & Gas into actionable decisions thanks to Complex Event Processing capturing sensor data, with a ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets, and Exalytics to provide the informed-decision interface up to the end user. The second session showed the Real Time Decision engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet. I have to close my post here, as I have to go run our practical hands-on lab, cooked up with Olivier Canonge, Christophe Pauliat and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud announced last Sunday by Larry and developed through many examples this morning by John Fowler. John also announced Solaris 11.1 today, with a range of network innovations and virtualization at the OS level, as well as many optimizations for applications, such as Oracle RAC, with the introduction of the lock manager inside the Solaris kernel. Last but not least, he introduced Xsigo Datacenter Fabric for highly simplified network and storage virtualization for your Cloud Infrastructure. Hoping you will get ready to jump on the next wave, we are here to help...

    Read the article

  • Oracle Enterprise Manager Ops Center 12c Update 1 is available now

    - by Anand Akela
    Following the announcement of Oracle Enterprise Manager Ops Center 12c on April 4th, we are happy to announce the release of Oracle Enterprise Manager Ops Center 12c update 1. This is a bundled patch release for Oracle Enterprise Manager Ops Center. Here are the key features of Oracle Enterprise Manager Ops Center 12c update 1: Oracle VM SPARC Server Pool HA Policy; automatic upgrade from Ops Center 11g update 3 and Ops Center 12c; Oracle Linux 5.8 and 6.x support; Oracle VM SPARC IaaS (Virtual Datacenters); WANBoot improvements with OBP handling enhancements; SPARC SuperCluster support; stability fixes. This new release contains significant enhancements in the update provisioning, bare metal OS provisioning, shared storage management, cloud/virtual datacenter, and networking management sections of the product. With this update, customers can achieve better handling of ASR faults, add networks and storage to virtual guests more easily, understand IPMP and VLAN configurations better, get a more robust LDAP integration, get virtualization-aware firmware patching, and observe improved product performance across the board. Customers can now accelerate Oracle VM SPARC and T4 deployments into production. Oracle Enterprise Manager Ops Center 11g and Ops Center 12c customers will now notice the availability of the new product update under the Administration tab within the Browser User Interface (BUI). The upgrade process is explained in detail in the Ops Center Administration Guide under “Chapter 10: Upgrading”. Please be sure to read over that chapter and the Release Notes before upgrading. During the week of July 9th, the full download of the product will be available from the Oracle Enterprise Manager Ops Center download website. Based on customer feedback, we have changed the updates to include the entire product: customers no longer need to install Ops Center 12c and then upgrade to the update 1 release; they can simply install Ops Center 12c update 1 directly. Here are some of the resources that can help you learn more about Oracle Enterprise Manager Ops Center and the new update 1: Oracle Enterprise Manager Ops Center OTN site; Bi-Monthly Product Demos; Oracle Enterprise Manager Ops Center Forum; Oracle Enterprise Manager Ops Center MOS Community. Watch the recording of the Oracle Enterprise Manager 12c launch webcast by clicking the following banner. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem: Compute “Roles” - computers running an OS and optionally IIS; you can have more than one "Instance" of a given Role. Storage - Blobs, Tables and Queues. Other Services - things like the Service Bus, Azure Connection Services, SQL Azure and Caching. It’s important to understand that some of these services are Stateless and others maintain State. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you’re in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren’t really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless. The Compute Role Instances in Windows Azure are Stateless. To upgrade hardware, because of a fault, or for many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It’s important to note that storage in Azure does maintain State. Your data will not simply disappear - it is maintained - in fact, it’s maintained three times in a single datacenter and all those copies are replicated to another for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained. So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a Stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there’s only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first “cashier” has someone to replace them when they disappear. It’s not just a good idea - to gain the Service Level Agreement (SLA) for our uptime in Azure, it’s a requirement. We point this out right in the Management Portal when you deploy the application. When you deploy a Role Instance you can also set the “Upgrade Domain”. Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post) - the process looks like this for two Roles. This example covers the scenario for upgrade, so you have four roles total - one Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two instances, and this example shows that you're covered for High Availability and upgrade paths. The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.

    Read the article

  • Head in the Clouds

    - by Tony Davis
    We're just past the second anniversary of the launch of Windows Azure. A couple of years' experience with Azure in the industry has provided some obvious success stories, but has deflated some of the initial marketing hyperbole. As a general principle, Azure seems to work well in providing a Service-Oriented Architecture for services in enterprises that suffer wide fluctuations in demand. Instead of being obliged to provide hardware sufficient for the occasional peaks in demand, one can hire capacity only when it is needed, and the cost of hosting an application is no longer a capital cost. It enables companies to avoid having to scale out hardware for peak periods only to see it underused for the rest of the time. A customer-facing application such as a concert ticketing system, which suffers high demand in short, predictable bursts of activity, is a great example of an application that would work well in Azure. However, moving existing applications to Azure isn't something to be done on impulse. Unless your application is .NET-based, and consists of 'stateless' components that communicate via queues, you are probably in for a lot of redevelopment work. It makes most sense for IT departments who are already deep in this .NET mindset, and who also want 'grown-up' methods of staging, testing, and deployment. Azure fits well with this culture and offers, as a bonus, good Visual Studio integration. The most commonly stated barrier to porting these applications to Azure is the problem of reconciling the use of the cloud with legislation for data privacy and security. Putting databases in the cloud is a sticky issue for many and impossible for some due to compliance and security issues, the need for direct control over data, and so on. In the face of feedback from the early adopters of Azure, Microsoft has broadened the architectural choices to cater for a wide range of requirements. As well as SQL Azure Database (SAD), Azure storage, and the unstructured 'BLOB and Entity-Attribute-Value' NoSQL storage alternative (which equates more closely with folders and files than a database), Windows Azure offers a wide range of storage options, including services such as OData: developers who are programming for Windows Azure can simply choose the one most appropriate for their needs. Secondly, and crucially, the Windows Azure architecture allows you the freedom to produce hybrid applications, where only those parts that need cloud-based hosting are deployed to Azure, whereas those parts that must unavoidably be hosted in a corporate datacenter can stay there. By using a hybrid architecture, it will seldom, if ever, be necessary to move an entire application to the cloud along with personal and financial data. For example, we could port to Azure only those parts of our ticketing application that capture and process ticket orders. Once an order is captured, the financial side can be processed in our own data center. In short, Windows Azure seems to be a very effective way of providing services that are subject to wide but predictable fluctuations in demand. Have you come to the same conclusions, or do you think I've got it wrong? If you've had experience with Azure, would you recommend it? It would be great to hear from you. Cheers, Tony.

    Read the article

  • Oracle at ARM TechCon

    - by Tori Wieldt
    ARM TechCon is a technical conference for hardware and software engineers, Oct. 30-Nov 1 in Santa Clara, California. Days two and three of the conference will be geared towards systems designers and software developers, those interested in building ARM processor-based modules, boards, and systems. It will cover hardware, software, and tools, ranging from low-power design, networking, and connectivity to open source software and security. Oracle is a sponsor of ARM TechCon, and will present three Java sessions and a hands-on lab:
    "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi" - The Raspberry Pi, an ARM-powered single board computer running a full Linux distro off an SD card, has caused a huge wave of interest among developers. This session looks at how Java can be used on a device such as this. Using Java SE for embedded devices and a port of JavaFX, the presentation includes a variety of demonstrations of what the Raspberry Pi is capable of. The Raspberry Pi also provides GPIO line access, and the session covers how this can be used from Java applications. Prepare to be amazed at what this tiny board can do. (Angela Caicedo, Java Evangelist)
    "Modernizing the Explosion of Advanced Microcontrollers with Embedded Java" - This session explains why Oracle Java ME Embedded is the right choice for building small, connected, and intelligent embedded solutions, such as industrial control applications, smart sensing, wireless connectivity, e-health, or general machine-to-machine (M2M) functionality, extending your business to new areas, driving efficiency, and reducing cost. The new Oracle Java ME Embedded product brings the benefits of Java technology to microcontroller platforms. It is a full-featured, complete, compliant software runtime with value-add features targeted to the embedded space, and it has the ability to interface with additional hardware components, remote manageability, and over-the-air software updates. It is accompanied by a feature-rich set of tools free of charge. (Fareed Suliman, Java Product Manager)
    "Embedded Java in Smart Energy and Healthcare" - This session covers embedded Java products and technologies that enable smart and connected devices in the Smart Energy and Healthcare/Medical industries. (speaker Kevin Lee)
    "Java SE Embedded Development on ARM Made Easy" - This hands-on lab aims to show that developers already familiar with the Java develop/debug/deploy lifecycle can apply those same skills to develop Java applications, using Java SE Embedded, on embedded devices. (speaker Jim Connors)
    In the Oracle booth #603, you can see the following demos:
    Industry Solutions with Java - This exhibit consists of a number of industry solutions and how they can be powered by Java technology deployed on embedded systems. Examples in consumer devices, home gateways, mobile health, smart energy, industrial control, and tablets, all powered by applications running on the Java platform, are shown. Some of the solutions demonstrate the ability of Java to connect intelligent devices at the edge of the network to the datacenter or the cloud as a total end-to-end platform.
    Java in M2M with Qualcomm - This station will exhibit a new M2M solutions platform co-developed by Oracle and Qualcomm that enables wireless communications for embedded smart devices powered by Java, and share the types of industry solutions that are possible. In addition, a new platform for wearable devices based on the ARM Cortex M3 platform is exhibited.
    Why Java for Embedded? - Demonstration platforms will show how traditional development environments, tools, and Java programming skills can be used to create applications for embedded devices. The advantages that Java provides because of the runtime's abstraction of software from hardware, modularity and scalability, security, and application portability and manageability are shared with attendees. Drop by and see why Java is an optimal applications platform for embedded systems.
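    The Raspberry Pi session above mentions GPIO line access from Java. As a minimal sketch of one common approach (the Linux sysfs GPIO interface, with a hypothetical pin number and wiring; not necessarily what the session demonstrates), blinking an LED from Java SE Embedded can be as simple as writing to a few files:
    Code Snippet (illustrative)
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class GpioBlink {
            private static void write(String path, String value) throws IOException {
                Files.write(Paths.get(path), value.getBytes(StandardCharsets.UTF_8));
            }

            public static void main(String[] args) throws Exception {
                String pin = "18";                               // hypothetical GPIO number
                String base = "/sys/class/gpio";
                write(base + "/export", pin);                    // expose the pin to user space (needs root/gpio rights)
                write(base + "/gpio" + pin + "/direction", "out");
                for (int i = 0; i < 10; i++) {                   // toggle an LED wired to the pin
                    write(base + "/gpio" + pin + "/value", (i % 2 == 0) ? "1" : "0");
                    Thread.sleep(500);
                }
                write(base + "/unexport", pin);                  // clean up
            }
        }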

    Read the article

  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the method of thinking was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome and intensive on the network, especially when dealing with larger amounts of data. Then along came robocopy, as an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well, it made it into the public realm. At least now we had the ability to determine what files had changed and only replicate those. Well, over time there has been evolution of this idea. DFS is obviously the Windows enterprise-class service to do this, along with BranchCache... however, you don't always need or want the power of DFS, especially when it comes to small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks, either because of a security need or because they are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy. SyncToy is from Microsoft, seemingly released in 2009; it wraps a nice GUI around setting up a paired set of folders (remember the mobile briefcase from Windows 98?) and allows you a choice of synchronization methods: 1-way or 2-way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don't want to replicate, and click run. Scheduling is even easier. MS has included a wrapper for doing just this, so all you enter in your scheduled task is SyncToyCmd.exe, -R as an argument, and the time schedule. No more complicated command lines or scripts.   I find this especially useful when I use MS backup to back up a system volume, but only want subsets of backup information from a data share, and ONLY when that dataset has changed, rather than relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies... one copy elsewhere suffices to back it up. At home, it is very useful for your pictures, videos, music, etc. - the backup is online and ready to access, not waiting for you to restore a backup file, and there is no need to institute a domain simply to have DFS.   Do note there is a risk... if you accidentally delete a file and do not catch this before the next sync, then depending on your SyncToy settings, you can indeed lose that file as the destination updates... so due diligence applies. I make it a rule to sync mainly one way: I use my master share for making changes, and allow the schedule to follow suit. Any really important file I lock down as read-only through file permissions so it cannot be deleted unless I intervene.   Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155

    Read the article

  • Internet of Things Becoming Reality

    - by kristin.jellison
    The Internet of Things is not just on the radar—it’s becoming a reality. A globally connected continuum of devices and objects will unleash untold possibilities for businesses and the people they touch. But the “things” are only a small part of a much larger, integrated architecture. A great example of this comes from the healthcare industry. Imagine an expectant mother who needs to watch her blood pressure. She lives in a mountain village 100 miles away from medical attention. Luckily, she can use a small “wearable” device to monitor her status and wirelessly transmit the information to a healthcare hub in her village. Now, say the healthcare hub identifies that the expectant mother’s blood pressure is dangerously high. It sends a real-time alert to the patient’s wearable device, advising her to contact her doctor. It also pushes an alert with the patient’s historical data to the doctor’s tablet PC. He inserts a smart security card into the tablet to verify his identity. This ensures that only the right people have access to the patient’s data. Then, comparing the new data with the patient’s medical history, the doctor decides she needs urgent medical attention. GPS tracking devices on ambulances in the field identify and dispatch the closest one available. An alert also goes to the closest hospital with the necessary facilities. It sends real-time information on her condition directly from the ambulance. So when she arrives, they already have a treatment plan in place to ensure she gets the right care. The Internet of Things makes a huge difference for the patient. She receives personalized and responsive healthcare. But this technology also helps the businesses involved. The healthcare provider achieves a competitive advantage in its services. The hospital benefits from cost savings through more accurate treatment and better application of services. All of this, in turn, translates into savings on insurance claims. This is an ideal scenario for the Internet of Things—when all the devices integrate easily and when the relevant organizations have all the right systems in place. But in reality, that can be difficult to achieve. Core design principles are required to make the whole system work. Open standards allow these systems to talk to each other. Integrated security protects personal, financial, commercial and regulatory information. A reliable and highly available systems infrastructure is necessary to keep these systems running 24/7. If this system were just made up of separate components, it would be prohibitively complex and expensive for almost any organization. The solution is integration, and Oracle is leading the way. We’re developing converged solutions, not just from device to datacenter, but across devices, utilizing the Java platform, and through data acquisition and management, integration, analytics, security and decision-making. The Internet of Things (IoT) requires the predictable action and interaction of a potentially endless number of components. It’s in that convergence that the true value of the Internet of Things emerges. Partners who take the comprehensive view and choose to engage with the Internet of Things as a fully integrated platform stand to gain the most from the Internet of Things’ many opportunities. To discover what else Oracle is doing to connect the world, read about Oracle’s Internet of Things Platform. Learn how you can get involved as a partner by checking out the Oracle Java Knowledge Zone. Best regards, David Hicks

    Read the article

  • Oracle/Sun - SPARC SuperCluster T4-4

    - by user12798668
    SPARC SuperCluster T4-4 is now available. The original SPARC SuperCluster was announced in December 2010 as a general-purpose engineered system, and with the release of the SPARC T4 processor in September 2011 it has been refreshed as the SPARC SuperCluster T4-4. The system is built around SPARC T4-4 servers, each carrying four SPARC T4 CPUs, and combines them with the building blocks of Oracle's other engineered systems: the Oracle Exadata Storage Server used in Exadata to accelerate the database tier, and the Exalogic Software stack used in Exalogic to accelerate Java middleware, all running on Oracle Solaris 10 and 11, so existing Solaris applications can also be consolidated onto it. Hardware configuration: 2 (Half Rack) or 4 (Full Rack) x SPARC T4-4 servers; 3 (Half Rack) or 6 (Full Rack) x Exadata Storage Server X2-2; 1 x ZFS Storage Appliance 7320; 3 x Sun DataCenter InfiniBand Switch 36; 1 x Ethernet management switch; a 42U rack with 2 x PDU. Main software components: OS: Oracle Solaris 11 and 10; virtualization: Oracle VM Server for SPARC and Oracle Solaris Zones; management: Oracle Enterprise Manager Ops Center and Grid Control; clustering: Oracle Solaris Cluster or Oracle Clusterware; database: Oracle Database 11g R2 (11.2.0.3); middleware: Exalogic Software, including Oracle WebLogic Server and Coherence; applications: Oracle and ISV applications supported on Oracle Solaris 11 and 10. SPARC SuperCluster will also be featured at Oracle OpenWorld Tokyo 2012, with the following sessions on April 5: G2-01, covering SPARC SuperCluster and Ops Center (11:50 - 13:20); S2-42, a UNIX server session featuring SPARC SuperCluster (16:30 - 17:15); and S2-53, an Oracle E-Business Suite session featuring SPARC SuperCluster (17:40 - 18:25). Details and registration: Oracle OpenWorld Tokyo 2012, http://www.oracle.com/openworld/jp-ja/index.html (invitation code 7264).

    Read the article

  • How to get local ActiveMQ broker to "mirror" a queue on a remote ActiveMQ broker?

    - by T.K.
    I have a local ActiveMQ broker which is on an unreliable internet connection, and also a remote ActiveMQ broker in a reliable datacenter. I have already sorted out a "store and forward" setup so that outgoing messages are sent to the remote broker when the Internet connection is available. That works great, but only for outbound messages. However, now I have to do the reverse. Here is the scenario: A new message appears in the remote ActiveMQ broker. The message is put into a specific queue. In a few minutes, the Internet connection becomes available to the local ActiveMQ broker. The local broker should then be able to pull the message from the remote broker and place it in its own local queue. Local consumers will then be able to see the message. So in essence, I need the local broker to become a subscribed consumer of the remote queue. I have looked through the ActiveMQ documentation but I can't find anything yet about how to do this in the .xml configuration file. Is this what I should be looking for? See: "ActiveMQ: JMS to JMS Bridge". Any advice and tips would be highly appreciated.
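    One plausible direction, sketched below, is a network connector defined in the local broker's activemq.xml, so the unreliable side always initiates the connection; with duplex="true" the remote broker can forward queue messages back over that same link as soon as local consumers subscribe. The broker name, remote host/port, and queue name here are placeholders, not values from the question:

        <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localBroker">
          <networkConnectors>
            <!-- local broker dials out; duplex lets messages flow in both directions over one link -->
            <networkConnector name="to-remote"
                              uri="static:(tcp://remote.example.com:61616)"
                              duplex="true">
              <dynamicallyIncludedDestinations>
                <queue physicalName="INCOMING.WORK"/>
              </dynamicallyIncludedDestinations>
            </networkConnector>
          </networkConnectors>
        </broker>

    The JMS-to-JMS bridge referenced in the documentation is the other route; the network-connector approach has the advantage that reconnection and store-and-forward are handled by the broker itself.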

    Read the article

  • Silverlight ClientAccessPolicy issue...I think

    - by Terrence
    First of all, I have my ClientAccessPolicy.xml file in the root of my website. If I access my website using the public domain name, like this: http://www.mydomain.com, and then go to the page where my SL control is, I get the spinning % numbers up until about 98%, then it quits and my SL control does not appear on the page. If I access my website using the machine name (the website is at a datacenter, and we have a VPN set up), like this: http://machinename, and then go to the page where my SL control is, everything works fine. This must be a ClientAccessPolicy issue, don't you think? Or what do you think the issue is? Thanks in advance. Here are the contents of my ClientAccessPolicy.xml file: <?xml version="1.0" encoding="utf-8" ?> <access-policy> <cross-domain-access> <policy> <allow-from http-request-headers="*"> <domain uri="*" /> </allow-from> <grant-to> <resource path="/" include-subpaths="true" /> </grant-to> </policy> </cross-domain-access> </access-policy>

    Read the article

  • Dig returns "status: REFUSED" for external queries?

    - by Mikey
    I can't seem to work out why my DNS isn't working properly, if I run dig from the nameserver it functions correctly: # dig ungl.org ; <<>> DiG 9.5.1-P2.1 <<>> ungl.org ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24585 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1 ;; QUESTION SECTION: ;ungl.org. IN A ;; ANSWER SECTION: ungl.org. 38400 IN A 188.165.34.72 ;; AUTHORITY SECTION: ungl.org. 38400 IN NS ns.kimsufi.com. ungl.org. 38400 IN NS r29901.ovh.net. ;; ADDITIONAL SECTION: ns.kimsufi.com. 85529 IN A 213.186.33.199 ;; Query time: 1 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Sat Mar 13 01:04:06 2010 ;; MSG SIZE rcvd: 114 but when I run it from another server in the same datacenter I receive: # dig @87.98.167.208 ungl.org ; <<>> DiG 9.5.1-P2.1 <<>> @87.98.167.208 ungl.org ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 18787 ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;ungl.org. IN A ;; Query time: 1 msec ;; SERVER: 87.98.167.208#53(87.98.167.208) ;; WHEN: Sat Mar 13 01:01:35 2010 ;; MSG SIZE rcvd: 26 my zone file for this domain is $ttl 38400 ungl.org. IN SOA r29901.ovh.net. mikey.aol.com. ( 201003121 10800 3600 604800 38400 ) ungl.org. IN NS r29901.ovh.net. ungl.org. IN NS ns.kimsufi.com. ungl.org. IN A 188.165.34.72 localhost. IN A 127.0.0.1 www IN A 188.165.34.72 The server is running Ubuntu 9.10 and Bind 9, if anyone can shed some light on this for me it'd make me very happy! thanks
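    A REFUSED status for a zone the server answers authoritatively on localhost usually points at BIND's query ACLs rather than at the zone file; if named.conf.options limits allow-query to localhost or a local ACL, external hosts get exactly this response. A minimal sketch of the options block to check, assuming that is the cause here (the recursion ACL shown is only an example):

        options {
            // answer authoritative queries from anywhere
            allow-query { any; };
            // but only recurse for the box itself
            allow-recursion { 127.0.0.1; };
        };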

    Read the article

  • Problems with DNS propagation 10 days after a change was made

    - by runlevel6
    The engineering team I work with has been in the process of moving equipment from one datacenter to another. Ten days ago we moved one of our name servers authoritative for our client's domains (ns1.faithhiway.com) and updated its IP address with its respective DNS provider (register.com) to point to the new datacenter. All tests done show that this name server is correctly running at its new location and when queried, returning the correct response for any domains it is responsible for. The problem is that well after 72 hours had gone by we were still seeing more DNS activity at its old IP address than at the new. The good news is that we kept a name server responding on the old IP address for the time being so we are not seeing any issues with the domains our nameserver is responsible for but the goal is to retire that as soon as possible. As you can see from WhatsMyDNS.net, a decent amount of propagation has occurred over the last 10 days since we made this change, but still there are some locations reporting our original IP. Considering that the TTL is only 3600 with the name servers responsible for this domain, it does not make any sense to myself or the other engineers working with me that we are having this issue. Now if I run a DNS check using one of the Register.com DNS servers (direct nameservers for faithhiway.com), I get the following (correct) result: # dig @dns01.gpn.register.com ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @dns01.gpn.register.com. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43232 ;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 5 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; AUTHORITY SECTION: faithhiway.com. 3600 IN NS dns01.gpn.register.com. faithhiway.com. 3600 IN NS dns02.gpn.register.com. faithhiway.com. 3600 IN NS dns03.gpn.register.com. faithhiway.com. 3600 IN NS dns04.gpn.register.com. faithhiway.com. 3600 IN NS dns05.gpn.register.com. ;; ADDITIONAL SECTION: dns01.gpn.register.com. 3600 IN A 98.124.192.1 dns02.gpn.register.com. 3600 IN A 98.124.197.1 dns03.gpn.register.com. 3600 IN A 98.124.193.1 dns04.gpn.register.com. 3600 IN A 69.64.145.225 dns05.gpn.register.com. 3600 IN A 98.124.196.1 ;; Query time: 50 msec ;; SERVER: 98.124.192.1#53(98.124.192.1) ;; WHEN: Thu Jan 27 15:16:57 2011 ;; MSG SIZE rcvd: 269 Just as a reference, here are the results when the same query is checked against a variety of Public DNS servers: Google: # dig @8.8.8.8 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @8.8.8.8. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12773 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 997 IN A 206.127.2.71 ;; Query time: 29 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Thu Jan 27 15:17:31 2011 ;; MSG SIZE rcvd: 52 Level 3: # dig @4.2.2.1 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @4.2.2.1. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46505 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 
2623 IN A 206.127.2.71 ;; Query time: 7 msec ;; SERVER: 4.2.2.1#53(4.2.2.1) ;; WHEN: Thu Jan 27 15:18:35 2011 ;; MSG SIZE rcvd: 52 Verizon: # dig @151.197.0.38 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @151.197.0.38. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32658 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; Query time: 81 msec ;; SERVER: 151.197.0.38#53(151.197.0.38) ;; WHEN: Thu Jan 27 15:19:15 2011 ;; MSG SIZE rcvd: 52 Cisco: # dig @64.102.255.44 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @64.102.255.44. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39689 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; AUTHORITY SECTION: faithhiway.com. 3600 IN NS dns01.gpn.register.com. faithhiway.com. 3600 IN NS dns04.gpn.register.com. faithhiway.com. 3600 IN NS dns05.gpn.register.com. faithhiway.com. 3600 IN NS dns02.gpn.register.com. faithhiway.com. 3600 IN NS dns03.gpn.register.com. ;; Query time: 105 msec ;; SERVER: 64.102.255.44#53(64.102.255.44) ;; WHEN: Thu Jan 27 15:20:05 2011 ;; MSG SIZE rcvd: 165 OpenDNS: # dig @208.67.222.222 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @208.67.222.222. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12328 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 169507 IN A 207.200.19.162 ;; Query time: 6 msec ;; SERVER: 208.67.222.222#53(208.67.222.222) ;; WHEN: Thu Jan 27 15:19:29 2011 ;; MSG SIZE rcvd: 52 SpeakEasy: # dig @66.93.87.2 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @66.93.87.2. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9342 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 169323 IN A 207.200.19.162 ;; Query time: 69 msec ;; SERVER: 66.93.87.2#53(66.93.87.2) ;; WHEN: Thu Jan 27 15:19:51 2011 ;; MSG SIZE rcvd: 52 As you can see above, the majority of queries are returning the correct result. But a few (OpenDNS and SpeakEasy in the examples above) are still showing the old IP address. Considering the length of time that has gone by, it seems obvious to me that either we have made a mistake and not thoroughly handled the DNS changes on our end (likely) or there is a problem with either the DNS provider for this domain (Register) or with some of the DNS servers out in the wild (rather unlikely). Any advice on how I can proceed with this? UPDATE (January 31, 2011): First of all, I apologize for the length of both the original question and this update. I contemplated removing some of the excess from the original post but just in case this problem and its solution are helpful to someone else in the future I'm just going to leave everything as it is. 
Anyway, I've been doing some more research into this problem, and have discovered the following interesting occurrence. While running a check on the glue records for faithhiway.com always resolve correctly, if I go and check a client domain (where ns1.faithhiway.com is authoritative), I get a strange response. It looks like the root servers are returning nsX.faithhiway.com as their old IP addresses still (under Additional Section). Because we have a server still there responding to DNS queries, the trace finishes and returns the correct IP addresses as the final step (again, under Additional Section). The example below uses one of the domains that we use that uses ns1.faithhiway.com as its authoritative DNS server. # dig +trace +nosearch +all +norecurse ignitemail.com ; <<>> DiG 9.2.4 <<>> +trace +nosearch +all +norecurse ignitemail.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46856 ;; flags: qr ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;. IN NS ;; ANSWER SECTION: . 7986 IN NS a.root-servers.net. . 7986 IN NS b.root-servers.net. . 7986 IN NS c.root-servers.net. . 7986 IN NS d.root-servers.net. . 7986 IN NS e.root-servers.net. . 7986 IN NS f.root-servers.net. . 7986 IN NS g.root-servers.net. . 7986 IN NS h.root-servers.net. . 7986 IN NS i.root-servers.net. . 7986 IN NS j.root-servers.net. . 7986 IN NS k.root-servers.net. . 7986 IN NS l.root-servers.net. . 7986 IN NS m.root-servers.net. ;; Query time: 39 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 228 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16325 ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 14 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; AUTHORITY SECTION: com. 172800 IN NS h.gtld-servers.net. com. 172800 IN NS m.gtld-servers.net. com. 172800 IN NS i.gtld-servers.net. com. 172800 IN NS l.gtld-servers.net. com. 172800 IN NS c.gtld-servers.net. com. 172800 IN NS k.gtld-servers.net. com. 172800 IN NS d.gtld-servers.net. com. 172800 IN NS f.gtld-servers.net. com. 172800 IN NS b.gtld-servers.net. com. 172800 IN NS a.gtld-servers.net. com. 172800 IN NS e.gtld-servers.net. com. 172800 IN NS g.gtld-servers.net. com. 172800 IN NS j.gtld-servers.net. ;; ADDITIONAL SECTION: a.gtld-servers.net. 172800 IN A 192.5.6.30 a.gtld-servers.net. 172800 IN AAAA 2001:503:a83e::2:30 b.gtld-servers.net. 172800 IN A 192.33.14.30 b.gtld-servers.net. 172800 IN AAAA 2001:503:231d::2:30 c.gtld-servers.net. 172800 IN A 192.26.92.30 d.gtld-servers.net. 172800 IN A 192.31.80.30 e.gtld-servers.net. 172800 IN A 192.12.94.30 f.gtld-servers.net. 172800 IN A 192.35.51.30 g.gtld-servers.net. 172800 IN A 192.42.93.30 h.gtld-servers.net. 172800 IN A 192.54.112.30 i.gtld-servers.net. 172800 IN A 192.43.172.30 j.gtld-servers.net. 172800 IN A 192.48.79.30 k.gtld-servers.net. 172800 IN A 192.52.178.30 l.gtld-servers.net. 172800 IN A 192.41.162.30 ;; Query time: 64 msec ;; SERVER: 198.41.0.4#53(a.root-servers.net) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 504 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12860 ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; AUTHORITY SECTION: ignitemail.com. 172800 IN NS ns1.faithhiway.com. ignitemail.com. 172800 IN NS ns2.faithhiway.com. ;; ADDITIONAL SECTION: ns1.faithhiway.com. 172800 IN A 207.200.19.162 ns2.faithhiway.com. 
172800 IN A 207.200.50.142 ;; Query time: 152 msec ;; SERVER: 192.54.112.30#53(h.gtld-servers.net) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 111 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43016 ;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; ANSWER SECTION: ignitemail.com. 3600 IN A 206.127.2.64 ;; AUTHORITY SECTION: ignitemail.com. 3600 IN NS ns1.faithhiway.com. ignitemail.com. 3600 IN NS ns2.faithhiway.com. ;; ADDITIONAL SECTION: ns1.faithhiway.com. 3600 IN A 206.127.2.71 ns2.faithhiway.com. 3600 IN A 206.127.2.72 ;; Query time: 25 msec ;; SERVER: 206.127.2.71#53(ns1.faithhiway.com) ;; WHEN: Mon Jan 31 09:22:18 2011 ;; MSG SIZE rcvd: 127 I really think this is a problem we have somewhere in our setup, but whether it is ignorance of something with DNS on my or my fellow engineer's end or just a dumb mistake we made, I have yet to find it.
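    A quick way to confirm the suspicion in the update is to ask one of the .com TLD servers directly what glue it hands out for the nameserver host itself; if the ADDITIONAL section still shows the old 207.200.x.x addresses, the registrar's host (glue) records for ns1/ns2.faithhiway.com were never updated, and changing zone data alone will not fix that; it has to be changed through the registrar's nameserver/host management. For example:

        # ask a .com TLD server directly for the glue record of the nameserver host
        dig +norecurse @a.gtld-servers.net ns1.faithhiway.com A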

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. One, in my office, has an MD1220 attached. The other, in my hosting services vendor's datacenter, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests: Seq. writes on MD1220 in my office: 1.1GB/s (bonnie++), 1.3GB/s (dd). Seq. writes on MD3200 at my vendor's: 240MB/s (bonnie++), 310MB/s (dd). Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my well-performing environment is cheaper than the badly performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully, somebody more familiar with DAS performance can look at it and tell me whether I'm missing something here or my expectations are too high. To summarize, the question here is: is it reasonable to expect about 100MB/s of sequential write throughput for each pair of drives in RAID10 on the MD3200? Is there any trick to enable such performance on the MD3200 with dual controllers, as opposed to the simpler MD1220 with a single H800 adapter? More details about the configurations. The good one in my office: Dell R710, 2x CPU X5650 @ 2.67GHz (12 cores), 96GB DDR3; OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64; 20x300GB 2.5" SAS 10K in a single RAID10, 1MB chunk size, on MD1220 + Dell H800 I/O controller with 1GB cache in the host. The not-so-good one at my vendor's: Dell R710, 2x CPU L5520 @ 2.27GHz (8 cores), 144GB DDR3; OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64; 20x146GB 2.5" SAS 15K in a single RAID10, 512KB chunk size, on Dell MD3200 with 2 I/O controllers in the array, 1GB cache each. Additional information: I've also run the same tests on the same vendor's host, but with different storage: two RAID10 arrays of 14x146GB 15K RPM drives each, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200 despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives in RAID1, PERC 6/i) I got about 128MB/s seq. writes. Just two internal drives gave me about half the throughput of 20 drives on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards, Igor
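    For anyone reproducing the comparison, one common way to get comparable dd numbers for sequential writes is a run along these lines (the file path and size are assumptions, not taken from the tests above; oflag=direct bypasses the page cache so the array rather than host RAM is measured):

        # write ~16GB sequentially in 1MB blocks, bypassing the page cache
        dd if=/dev/zero of=/data/ddtest bs=1M count=16384 oflag=direct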

    Read the article

< Previous Page | 9 10 11 12 13 14 15  | Next Page >