Search Results

Search found 27143 results on 1086 pages for 'include path'.


  • IIS7 url rewrite rules

    - by sympatric greg
    In a hosted environment, I will be using subdomains (and virtual directories) for various coding projects. I have a rewrite rule that changes 'subdomain.domain.com/url' to 'domain.com/subdomain/url'. This worked fine, except that the browser couldn't find resources with paths generated by ResolveURL("~/something"). The server was using the application path "/subdomain/", so based on the rewrite rule, the browser's request for "/subdomain/something" was being looked for at "/subdomain/subdomain/something", where it wasn't to be found. Either of these URLs was valid: http://www.domain.com/subdomain/something or http://subdomain.domain.com/something. I resolved this by adding another URL rewrite rule to the subdomain:

      <rule name="RemoveSuperDir">
        <match url="subdomain/(.*)" />
        <action type="Rewrite" url="{R:1}" />
      </rule>

    So for each subdomain that I might add, I will need to add such a rule. Is there a way to write a single rule at the domain level to resolve this issue?
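    One untested possibility (a sketch, not a confirmed answer; it assumes the URL Rewrite module's back-reference expansion in condition input strings and a literal domain of domain.com) is a single domain-level rule that strips the first path segment whenever it matches the first label of the requesting host:

      <rule name="RemoveSubdomainPrefix" stopProcessing="true">
        <match url="^([^/]+)/(.*)$" />
        <conditions>
          <!-- Concatenate the first path segment and the host, then use a regex
               back-reference to require that the host starts with that same segment. -->
          <add input="{R:1}|{HTTP_HOST}" pattern="^([^|]+)\|\1\.domain\.com$" />
        </conditions>
        <action type="Rewrite" url="{R:2}" />
      </rule>

    A request to www.domain.com/subdomain/something would not satisfy the condition, so the path-style URLs should keep working unchanged.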


  • Windows CE: SDK Doesn’t Show up in Visual Studio 2008

    - by Bruce Eitman
    A customer recently contacted me because after installing an SDK it didn’t show up in Visual Studio 2008. So, being a good vendor, I installed VS2008 and then installed the SDK – no problem, the SDK showed up and I could create projects based on it. I let the customer know that the SDK definitely works with VS2008. The customer got back to me and asked what OS I was using. Hmm, how could that play into this? I told him that I use Windows XP, and it turned out that he is way more modern than I am and is using both Windows Vista and 7. The customer opened a support case with Microsoft. The answer turns out to be that the SDK install requires the user to be logged on as an administrator when installing on Windows Vista and 7 for the SDK to show up in Visual Studio 2008. This problem does not seem to exist for Visual Studio 2005 on those operating systems. The actual instructions from Microsoft Support are:
    1) Make sure Visual Studio 2008 is not running. I also shut down the Device Emulator Manager, but you may not be using that.
    2) Open a "Visual Studio 2008 Command Prompt" as Administrator. On Windows 7, just right-click the shortcut and pick the "Run as administrator" option.
    3) Enter the following command: msiexec /log SDKInstallLog.txt /package <the path to your .msi file>
    4) When asked if you wish to do a custom or complete install, pick custom.
    5) Instruct the installer to omit the installation of the documentation. This was something I found about a CE 6 SDK installation issue and may have no bearing upon your problem, but I did it anyway.
    6) Install.
    Copyright © 2010 – Bruce Eitman All Rights Reserved


  • Portable server room air-con options

    - by Bridgette
    We are looking for a portable industrial air conditioner for our server room which would blow the hot air into the ceiling cavity; a split system is not an option (http://www.ikoo.com.au/Aircond.png). Something exactly like this would be ideal, but unfortunately it is not available in AUS: http://www.apc.com/resource/include/techspec_index.cfm?base_sku=ACPA4000&ISOCountrycode=us http://www.apcmedia.com/salestools/ASTE-6Z2RUU_R1_EN.pdf. So we are pretty much looking for competing products to APC's NetworkAIR PA4000 that are available in AU. We currently have 3 x DeLonghi Penguino PACT120, but space is limited and getting more of these is probably not ideal.


  • MOSS 2007 SP2 - Error while provisioning SSP.

    - by Tim
    This is a brand new installation. I am trying to provision the first SSP for MOSS and I keep getting the following error: (Provisioning failed: A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)) The error also keeps appearing in the Application event log as event ID 7888. Google searches tell me it's a connection issue between the SharePoint server and the SQL server; however, this is a production SQL Server which has several databases on it that do not experience any problems. These databases also include the Central Admin and Web Application databases for SharePoint, which are all working fine. Any guidance would be greatly appreciated. Thanks in advance.


  • Java Spotlight Episode 105: Mark Reinhold on the Future of Java

    - by Roger Brinkley
    Our yearly interview with Mark Reinhold, Chief Java Architect, Java Platform Group, on the future of Java. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.
    Show Notes
    News:
    - Two Java Update Releases
    - New Java SE 6 software updates from Apple for OS X 10.8, 10.7 and 10.6 are now live and available to all customers via the Mac App Store / Software Update
    - The JavaFX Community Site on Java.net
    - JSR 360: Connected Limited Device Configuration 8
    - JSR 361: Java ME Embedded Profile
    - 2012 JCP EC Election Ballot open
    - Meet the EC Candidates Recording and Materials
    Events:
    - Oct 22-23, Freescale Technology Forum - Japan, Tokyo, Japan
    - Oct 23-25, EclipseCon Europe, Ludwigsburg, Germany
    - Oct 30-Nov 1, Arm TechCon, Santa Clara, United States of America
    - Oct 31, JFall, Hart van Holland, Netherlands
    - Nov 2-3, JMaghreb, Rabat, Morocco
    - Nov 5-9, Øredev Developer Conference, Malmö, Sweden
    - Nov 13-17, Devoxx, Antwerp, Belgium
    - Nov 20-22, DOAG 2012, Nuremberg, Germany
    - Dec 3-5, jDays, Göteborg, Sweden
    - Dec 4-6, JavaOne Latin America, São Paulo, Brazil
    Feature Interview: Mark Reinhold is Chief Architect of the Java Platform Group at Oracle, where he works on the Java Platform, Standard Edition, and OpenJDK. His past contributions to the platform include character-stream readers and writers, reference objects, shutdown hooks, the NIO high-performance I/O APIs, library generification, and service loaders. Mark was the lead engineer for the 1.2 and 5.0 releases and the specification lead for Java SE 6. He is currently leading the Jigsaw and JDK 7 Projects in the OpenJDK Community. Mark holds a Ph.D. in Computer Science from the Massachusetts Institute of Technology. In this interview he discusses the future of the Java Platform with regard to the Jigsaw, Lambda, and Nashorn components, as well as the OpenJDK community.
    What’s Cool:
    - QotD: Ubuntu 12.10 Release Notes on OpenJDK 7
    - New Lambda binary drop
    - Development forest for Compact Profiles (JEP 161)


  • Install "Massive Coupon"

    - by ffffff
    I want to install "Massive Coupon": http://github.com/robstyles/Massive-Coupon---Open-source-groupon-clone
    I've set up apache2 + mod_wsgi + mysql on Ubuntu 9 and have written the following settings.py:

      # Django settings for massivecoupon project.
      import socket, os
      . .
      DATABASE_ENGINE = 'mysql'       # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
      DATABASE_NAME = 'grouponpy'     # Or path to database file if using sqlite3.
      DATABASE_USER = 'grouponpy'     # Not used with sqlite3.
      DATABASE_PASSWORD = 'password'  # Not used with sqlite3.
      DATABASE_HOST = 'localhost'     # Set to empty string for localhost. Not used with sqlite3.
      DATABASE_PORT = ''              # Set to empty string for default. Not used with sqlite3.

    What do I have to do additionally?
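    A hedged sketch of the usual next steps (assumptions, since the project's own README is not quoted here): create the MySQL database and user that settings.py references, then let Django build the schema, and finally point mod_wsgi at the project.

      # create the database/user named in settings.py (password value is the poster's placeholder)
      mysql -u root -p
      mysql> CREATE DATABASE grouponpy CHARACTER SET utf8;
      mysql> GRANT ALL PRIVILEGES ON grouponpy.* TO 'grouponpy'@'localhost' IDENTIFIED BY 'password';

      # from the project directory: create the tables (Django 1.x-era command)
      python manage.py syncdb

    After that, the Apache side still needs a WSGIScriptAlias (or equivalent mod_wsgi configuration) pointing at the project's WSGI handler, which is a separate step not shown in the question.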


  • PHP FastCGI HTTP Error 500 on Windows 7

    - by CJM
    I've just installed PHP (5.3.1) and MySQL (5.1.44) on my development machine. Then I used the Web Platform Installer to install a copy of Joomla and Drupal. However, when I tried to browse either application, I get an HTTP Error 500:

      Module          FastCgiModule
      Notification    ExecuteRequestHandler
      Handler         PHP_via_FastCGI
      Error Code      0x00000000
      Requested URL   http://localhost:808/drupal/index.php
      Physical Path   D:\Projects\drupal\index.php
      Logon Method    Anonymous
      Logon User      Anonymous

    PHPInfo.php reports that FastCGI is configured (not sure if that is significant). Surely the fact that PHPInfo.php reports anything is an indication that PHP itself is working...? I'm struggling to know where to look for a solution... Each application appears to be configured similarly to my other [ASP/ASP.NET] applications.
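    Not a confirmed diagnosis, but a few php.ini directives are commonly required for PHP 5.3 under IIS FastCGI and are worth checking first (treat this as a sketch of the usual IIS recommendations, not something taken from the question):

      ; php.ini settings often needed when PHP runs under IIS FastCGI
      fastcgi.impersonate = 1
      cgi.fix_pathinfo = 1
      cgi.force_redirect = 0
      ; Joomla/Drupal also need the MySQL extension loaded
      extension_dir = "ext"
      extension = php_mysql.dll

    Temporarily turning on display_errors (or checking the PHP error log) usually reveals whether the 500 is really a PHP fatal error hiding behind the FastCGI handler.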


  • Editing TrueType fonts

    - by Parsa
    In typical Persian fonts, which are TrueType, there is a historical problem with Yeh and Kaf. These fonts were created for Windows 98, which didn't include full Persian support, and now we have two kinds of Kaf: Keheh (0x6a9, ک) and Arabic Kaf (0x643, ك), and two kinds of Yeh: Farsi Yeh (0x6cc, ی) and Arabic Yeh (0x64a, ي). Old fonts use the Arabic ones, but the standard keyboard for Persian of course uses the Persian ones. Is it possible to edit and fix these fonts? I've made many attempts to replace these characters with FontLab Studio, all of which failed. Any suggestions?
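    One possible approach, sketched with the fontTools library rather than FontLab (an assumption, not something from the question): add cmap entries so the Persian code points reuse the glyphs the old font already has at the Arabic code points. Note this is only an approximation; Farsi Yeh drops its dots in final/isolated forms, so simply reusing the Arabic Yeh glyph is not typographically exact.

      # pip install fonttools -- file names below are hypothetical
      from fontTools.ttLib import TTFont

      font = TTFont("OldPersianFont.ttf")
      best = font.getBestCmap()            # current codepoint -> glyph name
      remap = {
          0x06A9: 0x0643,   # Keheh     <- reuse the Arabic Kaf glyph
          0x06CC: 0x064A,   # Farsi Yeh <- reuse the Arabic Yeh glyph
      }

      for table in font["cmap"].tables:
          if not table.isUnicode():
              continue
          for new_cp, old_cp in remap.items():
              if old_cp in best:
                  table.cmap[new_cp] = best[old_cp]   # map the Persian codepoint to the existing glyph

      font.save("OldPersianFont-fixed.ttf")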


  • Binding Super+C Super+V to Copy and Paste

    - by solo
    For some time I've been interested in binding the Windows Key (Super_L) on my keyboard to Copy and Paste, for no other reason but convenience and consistency between my desktop and my MacBook. I thought that I was close after reading about xmodmap and executing the following:

      $ # re-map Super_L to Mode_switch, the 3rd col in keymap table `xmodmap -pke`
      $ xmodmap -e "keycode 133 = Mode_switch"
      $ # map Mode_switch+c to copy
      $ xmodmap -e "keycode 54 = c C XF86_Copy C"
      $ # map Mode_switch+v to paste
      $ xmodmap -e "keycode 55 = v V XF86_Paste V"

    Unfortunately, XF86Copy and XF86Paste don't seem to work at all. They are listed in /usr/include/X11/XF86keysym.h, and xev shows that the key sequence is being interpreted by X as XF86Paste and XF86Copy. Do these symbols actually work? Do they have to have application-level support?
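    For what it's worth, the XF86 Copy/Paste keysyms generally do rely on the application or toolkit binding them, so many programs simply ignore them. A hedged alternative sketch (it assumes the xbindkeys and xdotool packages, which the question does not mention): keep Super as a plain modifier and have it synthesize the ordinary Ctrl shortcuts that every application already understands.

      # ~/.xbindkeysrc
      "xdotool key --clearmodifiers ctrl+c"
          Mod4 + c
      "xdotool key --clearmodifiers ctrl+v"
          Mod4 + v

    Then start xbindkeys (and add it to your session autostart). One caveat: inside terminals Ctrl+C is the interrupt key, so a terminal-specific binding would still be needed there.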


  • Adding MySQL servers/data nodes to a MySQL Cluster without restarting the entire cluster

    - by Dwayne Johnson
    I currently have MySQL clustering up and running. For high scalability, is there a way to add either MySQL nodes, data nodes, or management nodes without restarting the entire cluster? I wish to understand how it is implemented, or whether there is documentation I can read. I believe only the latest version can support this. I am running NDB 7.0. I am aware that I am able to add the nodes online, but it requires me to perform a rolling restart. What other approach can I take to implement this without restarting nodes in my network?


  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with object databases for a while now, and today I have done just that. One of the obvious choices I had to make was which one to use. My criteria for choosing one today were simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET. My decision was between two: db4o and MongoDb. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest. I am making a blogging application and front end as a project with which I can test and learn with these object databases. Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment where I cannot install, run and maintain a server instance of said object database. I can do exactly this with db4o. I have not tried to do this with MongoDb at time of writing. There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used with some of the heavyweights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010
    In the example which I am building I am using StructureMap as my IoC. To inject the object for db4o I went with a singleton instance scope, as I am using a single file and I need this to be available to any thread in the process, as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively. Again I want to point out that I have chosen to stick with the non-server implementation of db4o as I wanted to use this in a shared hosting environment where I cannot have such servers installed and run.

      public static class Bootstrapper
      {
          public static void ConfigureStructureMap()
          {
              ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));
          }
      }

      public class MyApplicationRegistry : Registry
      {
          public const string DB4O_FILENAME = "blog123";

          public string DbPath
          {
              get
              {
                  return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);
              }
          }

          public MyApplicationRegistry()
          {
              For<IObjectContainer>().Singleton().Use(
                  () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));

              Scan(assemblyScanner =>
              {
                  assemblyScanner.TheCallingAssembly();
                  assemblyScanner.WithDefaultConventions();
              });
          }
      }

    So my code above is the StructureMap plumbing which I use for the application. I am doing this simply as a quick scratch pad to play around with different things, so I am simply segregating logical layers with folder structure as opposed to different assemblies. It will be easy if I want to do this with any segment, but for the purposes of example I have literally just whacked everything in the one assembly. You can see an example file structure I have on the right.
    I am planning on testing out a few implementations of the object databases out there, so I can program to an interface of IBlogRepository. One of the things which I was unsure about was how it performed under a multi-threaded environment, which it will undoubtedly be used in 9 times out of 10, and because I am using the db context as a singleton, I assumed that the library was of course thread safe, but I did not know, as I have not read it anywhere in the documentation; again this is probably me not reading things correctly. In short, though, I threw together a simple test where I simply iterate to a limit, each time kicking a common task off with a thread from a thread pool. This task simply created a random Post and added it to the storage. I put the execution of the threads inside the Setup of the test and then simply ensure the number of posts committed to the database is equal to the number of iterations I made; here is the code I used to do the multi-threaded jobs:

      [TestInitialize]
      public void Setup()
      {
          var sw = new System.Diagnostics.Stopwatch();
          sw.Start();
          var resetEvent = new ManualResetEvent(false);
          ThreadPool.SetMaxThreads(20, 20);
          for (var i = 0; i < MAX_ITERATIONS; i++)
          {
              ThreadPool.QueueUserWorkItem(delegate(object state)
              {
                  var eventToReset = (ManualResetEvent)state;
                  var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" };
                  Repository.Put(post);
                  var counter = Interlocked.Decrement(ref _threadCounter);
                  if (counter == 0) eventToReset.Set();
              }, resetEvent);
          }
          WaitHandle.WaitAll(new[] { resetEvent });
          sw.Stop();
          Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
      }

    I was not doing this to test out the speed performance of db4o, but while I was doing this I could not help but put in a StopWatch and see, out of sheer interest, how fast it would be to insert a number of Posts. I tested it out in this case with 10000 inserts of a small, simple POCO and it resulted in an average of 899.36 object inserts / second. Again this is just a simple, crude test which came out of my curiosity at how it performed under many threads when using the non-server implementation of db4o. The spec summary of the computer I used is as follows:
    With regards to the actual repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running. One thing I have noticed in the exposure I have had so far is that the Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth and this could be there already; and if not, they have provided everything one needs to make their own repository. An example of a couple of methods from my db4o implementation of the BlogRepository is below:

      public class BlogRepository : IBlogRepository
      {
          private readonly IObjectContainer _db;

          public BlogRepository(IObjectContainer db)
          {
              _db = db;
          }

          public void Put(DomainObject obj)
          {
              _db.Store(obj);
          }

          public void Delete(DomainObject obj)
          {
              _db.Delete(obj);
          }

          public Post GetByKey(object key)
          {
              return _db.Query<Post>(post => post.Key == key).FirstOrDefault();
          }
          …

    Anyway, I hope to get a few more implementations going of the object databases and literally just get familiarized with them and the concept of NoSQL databases. Cheers for now, Andrew


  • AIIM Best Practice Awards to Two Oracle Customers

    - by [email protected]
    On Tuesday night at the AIIM Awards Banquet, two Oracle customers and their implementation partners won awards for their Oracle Enterprise 2.0 implementations. The Bureau of Indian Affairs, a division of the Department of the Interior, won a Carl E. Nelson Best Practices Award for their implementation of Oracle WebCenter and Oracle Content Management to provide an interactive social media environment to engage and inform their constituent communities. The BIA Citizen Portal provides all the services of the Bureau of Indian Affairs to the community of 564 federally recognized tribes that include over 1.9 million American Indians and Alaska Natives. This integration was achieved with the support of Oracle partner Mythics. The Charles Town Police Department used Oracle Content Management to integrate with and support their police evidence system. This integration was created in partnership with Oracle partner EDAC Systems Inc. Diane Hoppe of EDAC Systems Inc. was on hand to receive the award for the Charles Town Police Department. Pictures of the award winners (photo captions): Linus Chow, Oracle; John Mancini, President of AIIM; and Diane Hoppe, EDACS - Charles Town Police. John Mancini, President of AIIM; Linus Chow, Oracle; Chris Baker, Mythics; and the Bureau of Indian Affairs. You can read more in the AIIM press release.


  • Show Notes: Bob Hensle on IT Strategies from Oracle

    - by Bob Rhubart
    The latest ArchBeat Podcast (RSS) features a conversation with Oracle Enterprise Architecture director Bob Hensle (LinkedIn). Bob talks about IT Strategies from Oracle, an extensive library of reference architectures, best practices, and other documents now available (it’s a freebie!) to registered Oracle Technology Network members. Listen to Part 1: Bob offers some background on the IT Strategies from Oracle project and an overview of the included documents. Listen to Part 2 (Feb 16): A discussion of how SOA and other issues are reflected in the IT Strategies documents. Share your feedback on any of the documents in the IT Strategies from Oracle Library: [email protected]. For a nice complement to the IT Strategies from Oracle Library, check out Oracle Experiences in Enterprise Architecture, an ongoing series of short essays from members of the Oracle Enterprise Architecture team based on their field experience. In the pipeline: ArchBeat programs in the works include an interview with Dr. Frank Munz, the author of Middleware and Cloud Computing, excerpts from another architect virtual meet-up, and a conversation with Oracle ACE Director Debra Lilley about her insight into Fusion Applications. Stay tuned: RSS


  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally posted in October 2012. One frequent question is "what type of domain should I use to run applications?" There used to be a simple answer, "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC and the introduction of systems like the current SPARC servers, including the T4 and T5 systems, the Oracle SuperCluster T5-8 and the Oracle SuperCluster M6-32, provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, and memory sizes are much larger now, and far more demanding applications are now being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below discuss the criteria for choosing between domain types.
    Review: division of labor and types of domain
    Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with their own roles:
    - Control domain: the management control point for the server; it runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones.
    - I/O domain: a domain that has been assigned physical I/O devices. The devices may be:
      - one or more PCIe root complexes (in which case the domain is also called a root complex domain). The domain has native access to all the devices on the assigned PCIe buses. The devices can be any device type supported by Solaris on the hardware platform.
      - an SR-IOV (Single-Root I/O Virtualization) function. SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs) which can be individually assigned directly to domains. SR-IOV devices currently can be Ethernet or InfiniBand devices.
      - direct I/O ownership of one or more PCI devices residing in a PCIe bus slot. The domain has direct access to the individual devices.
      An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices.
    - Service domain: a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It usually is an I/O domain as well, in order for it to have devices to virtualize and serve out.
    - Guest domain: a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.
    Device considerations
    Consider the following when choosing between virtual devices and physical devices:
    - Virtual devices provide the best flexibility: they can be dynamically added to and removed from a running domain, and you can have a large number of them, up to a per-domain device limit.
    - Virtual devices are compatible with live migration: domains that exclusively have virtual devices can be live migrated between servers supporting domains.
    On the other hand:
    - Physical devices provide the best performance: in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster.
    - Physical I/O devices do not add load to service domains: all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity.
    - Physical I/O devices can be other than network and disk: we virtualize network, disk, and serial console, but physical devices can be the wide range of attachable certified devices, including things like tape and CDROM/DVD devices.
    In some cases the lines are now blurred: virtual devices have better performance than previously (starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance), and there is more flexibility with physical devices than before (SR-IOV devices can now be dynamically reconfigured on domains). Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O, and you can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices.
    Typical deployment
    A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure. Guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device does not result in an application outage. This also permits "rolling upgrades" in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains; a minimal provisioning sketch is shown below.
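    The following is a hedged sketch of carving out such a guest domain with the ldm CLI (the domain, switch, and disk-service names such as ldg1, primary-vsw0, and primary-vds0 are illustrative assumptions, and the disk backend path is hypothetical); it assumes a control/service domain named primary already provides a virtual switch and a virtual disk service:

      # resources for the new guest domain
      ldm add-domain ldg1
      ldm add-vcpu 8 ldg1
      ldm add-memory 8G ldg1

      # virtual network: attach a vnet to the virtual switch served by primary
      ldm add-vnet vnet0 primary-vsw0 ldg1

      # virtual disk: export a backend from primary's virtual disk service, then attach it
      ldm add-vdsdev /dev/dsk/c0t1d0s2 vol1@primary-vds0
      ldm add-vdisk vdisk0 vol1@primary-vds0 ldg1

      # bind resources and start the guest
      ldm bind-domain ldg1
      ldm start-domain ldg1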
    Changing application deployment patterns
    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically-scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously. In those engineered systems, I/O domains are used for high performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains. They use their I/O connectivity for performance for their own applications.
    However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC.
    To recap:
    - Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain.
    - I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses (for example, 16 buses on the T5-8), making it possible to configure multiple I/O domains, each owning its own bus.
    - Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage.
    - The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception: it's not a general purpose environment; it's an engineered system with specifically configured applications and optimization for optimal performance.
    These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.
    Summary
    Higher capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.


  • Procmail lock failures and errors while writing

    - by user58292
    I'm setting up a mail server on an embedded linux system. When sending mail to a local user I get the following error from procmail:

      procmail: Lock failure on "/home/mail/ktos/.mailspool.lock"
      procmail: Error while writing to "/home/mail/ktos/.mailspool"
      procmail: Error while writing to "/var/spool/mail/ktos"
      From root@waben Wed Dec 15 10:00:40 2010
       Folder: **Bounced**    0
      procmail: Lock failure on "/root/.mailspool.lock"
      procmail: Error while writing to "/root/.mailspool"
      From MAILER-DAEMON Wed Dec 15 10:00:41 2010
       Subject: Returned mail: see transcript for details
       Folder: /var/spool/mail/root    1732

    And the mail goes to /var/spool/mail/root. This is my /etc/procmailrc:

      PATH=/usr/bin:/usr/local/bin
      MAILDIR=$HOME/.mailspool
      DEFAULT=$HOME/.mailspool
      LOGFILE=/dev/pts/0
      SHELL=/bin/sh

    What could be the problem? I'm still pretty green with all the sendmail and procmail stuff as I'm primarily a developer.
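    Not a diagnosis from the thread, but an assumption-laden sketch to test with: both errors point at $HOME/.mailspool, so the first things to check are that the path exists and is writable by the user procmail delivers as, and that LOGFILE points somewhere that exists at delivery time (/dev/pts/0 will not when no terminal is attached). An alternative ~/.procmailrc that delivers to a maildir (trailing slash), a format procmail writes without needing a lock file:

      PATH=/usr/bin:/usr/local/bin
      MAILDIR=$HOME                  # directory procmail changes into
      DEFAULT=$HOME/Maildir/         # trailing slash = maildir delivery, no lock file needed
      LOGFILE=$HOME/procmail.log
      SHELL=/bin/sh

    Make sure the Maildir directory (with cur/, new/ and tmp/ subdirectories) exists beforehand, e.g. created with maildirmake or plain mkdir -p.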


  • New Trusted Status awarded to first Mobile Java Developer

    - by Jacob Lehrbaum
    Java Verified has just announced that Gameloft is the first developer to receive its new Trusted Status! Java Verified is an industry-recognized Java testing and signing program backed and funded by companies such as AT&T, LG, Motorola, Nokia, Oracle, Orange, Samsung and Vodafone, and chartered with making it easier for mobile developers to certify and deploy applications for use across the billions of mobile handsets that run Java ME. Because of its breadth and diversity, Java ME provides an unmatched opportunity to reach more than 3 billion consumers, but at the same time developers are faced with the challenge of working with multiple distribution channels and a range of handsets. To this end, the Java Verified program provides a suite of tests that help to validate identity, functionality, integrity, and quality. Since its rebirth in 2010 as an independent organization, the Java Verified program has been actively working to make it even easier to create and distribute Java ME apps. Example initiatives include updates to the Unified Testing Criteria to make it easier to test "Simple Apps," community outreach to better understand and address developer pain points, and a new "Trusted Status." In the words of the Java Verified Program, Trusted Status is "a privileged status to be granted to developers who will have proven that the quality of their Java ME apps is of a consistently high standard. These are developers who will have earned the trust of Java Verified by demonstrating unfailingly that testing to the UTC standard is a crucial part of their product development activity." The first developer to be awarded this status is Gameloft. By achieving Trusted Status, Gameloft can now test their applications to the Java Verified standard without needing to provide Java Verified with the evidence. The apps then automatically get signed with the Java Verified signature, enabling Gameloft to benefit from reduced costs and time-to-market for their new Java ME applications from here on out. Learn more about the exciting news or apply now for Trusted Status!


  • How to install mod_ssl for Apache

    - by Nick Foote
    OK, so I installed Apache httpd a while ago and have recently come back to it to try to set up SSL and get it serving several different Tomcat servers. At the moment I have two completely separate Tomcat instances serving two slightly different versions of my web app (one for dev and one for demo, say) to two different ports: mydomain.com:8081 and mydomain.com:8082. I successfully (back in Jan) used mod_jk to get httpd to serve those same Tomcat instances at http://www.mydomain.com:8090/dev and http://www.mydomain.com:8090/demo (8090 because I've got another app running on 8080 via Jetty at this stage) using the following code in httpd.conf:

      LoadModule jk_module modules/mod_jk.so
      JkWorkersFile conf/workers.properties
      JkLogFile logs/mod_jk.log
      JkLogLevel debug
      <VirtualHost *:8090>
          JkMount /devd* tomcatDev
          JkMount /demo* tomcatDemo
      </VirtualHost>

    What I'm now trying to do is enable SSL. I've added the following to httpd.conf:

      Listen 443
      <VirtualHost _default_:443>
          JkMount /dev* tomcatDev
          JkMount /demo* tomcatDemo
          SSLEngine on
          SSLCertificateFile "/opt/httpd/conf/localhost.crt"
          SSLCertificateKeyFile "/opt/httpd/conf/keystore.key"
      </VirtualHost>

    But when I try to restart Apache with "apachectl restart" (yes, after shutting down that other app I mentioned so it doesn't toy with HTTPS connections) I continuously get the error:

      "Invalid command 'SSLEngine', perhaps misspelled or defined by a module not included in the server configuration. httpd not running, trying to start"

    I've looked in the httpd/modules dir and indeed there is no mod_ssl, only mod_jk.so and httpd.exp. I've tried using yum to install mod_ssl; it says it's already installed. Indeed I can locate mod_ssl.so in /usr/lib/httpd/modules, but this is NOT the path to where I've installed httpd, which is /opt/httpd, and in fact /usr/lib/httpd contains nothing but the modules dir. Can anyone tell me how to install mod_ssl properly for my installed location of httpd so I can get past this error?
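    A hedged sketch of one way out (assuming the httpd under /opt/httpd was built from source without SSL support, which the question implies but doesn't state): the yum-installed mod_ssl belongs to the distribution's httpd in /usr/lib/httpd and generally won't load cleanly into a separately built server, so the usual route is to rebuild the /opt/httpd installation with SSL enabled and then load the module.

      # from the Apache httpd source tree used for the /opt/httpd build
      ./configure --prefix=/opt/httpd --enable-so --enable-ssl
      make && make install

      # httpd.conf -- load the module before any SSLEngine/SSLCertificate* directives
      LoadModule ssl_module modules/mod_ssl.so

    If rebuilding isn't an option, switching to the distribution's httpd + mod_ssl packages and moving the vhost configuration over is the other common path.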


  • General Overview of Design Pattern Types

    Typically, most software engineering design patterns fall into one of three categories with regard to type:
    - Creational Type Patterns
    - Structural Type Patterns
    - Behavioral Type Patterns
    The Creational pattern type is geared toward defining the preferred methods for creating new instances of objects. An example of this type is the Singleton Pattern. The Singleton Pattern can be used if an application only needs one instance of a class. In addition, this singular instance also needs to be accessible across an application. The benefit of the Singleton Pattern is that you control both instantiation and access using this pattern; a minimal sketch is shown after this overview.
    The Structural pattern type is a way to describe the hierarchy of objects and classes so that they can be consolidated into a larger structure. An example of this type is the Façade Pattern. The Façade Pattern is used to define a base interface so that all other interfaces inherit from the parent interface. This can be used to simplify a number of similar object interactions into one single standard interface.
    The Behavioral pattern type deals with communication between objects. An example of this type is the State Design Pattern. The State Design Pattern enables objects to alter functionality and processing based on the internal state of the object at a given time.
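    A minimal Singleton sketch (illustrative only; Python is used here as a neutral example language, and thread safety is handled with a simple lock):

      import threading

      class Configuration:
          # Application-wide configuration object: only one instance is ever created.
          _instance = None
          _lock = threading.Lock()

          def __new__(cls):
              # Double-checked locking so concurrent first calls still yield one instance.
              if cls._instance is None:
                  with cls._lock:
                      if cls._instance is None:
                          cls._instance = super().__new__(cls)
                          cls._instance.settings = {}
              return cls._instance

      # Usage: both names refer to the same controlled instance.
      a = Configuration()
      b = Configuration()
      a.settings["theme"] = "dark"
      assert a is b and b.settings["theme"] == "dark"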


  • PeopleSoft 9.2 Financial Management Training – Now Available

    - by Di Seghposs
    A guest post from Oracle University.... Whether you’re part of a project team implementing PeopleSoft 9.2 Financials for your company or a partner implementing for your customer, you should attend some of the new training courses. Everyone knows project team training is critical at the start of a new implementation, including configuration training on the core application modules being implemented. Oracle offers these courses to help customers and partners understand the functionality most relevant to complete end-to-end business processes, to identify any additional development work that may be necessary to customize applications, and to ensure integration between different modules within the overall business process. Training will provide you with the skills and knowledge needed to ensure a smooth, rapid and successful implementation of your PeopleSoft applications in support of your organization’s financial management processes, including step-by-step instruction for implementing, using, and maintaining your applications. It will also help you understand the application and configuration options to make the right implementation decisions. Courses vary based on your role in the implementation and on-going use of the application, and should be a part of every implementation plan, whether it is for an upgrade or a new rollout. Here are some of the roles that should consider training:
    · Configuration or functional implementers
    · Implementation Consultants (Oracle partners)
    · Super Users
    · Business Analysts
    · Financial Reporting Specialists
    · Administrators
    PeopleSoft Financial Management Courses:
    New Features Course:
    · PeopleSoft Financial Solutions Rel 9.2 New Features
    Functional Training:
    · PeopleSoft General Ledger Rel 9.2
    · PeopleSoft Payables Rel 9.2
    · PeopleSoft Receivables Rel 9.2
    · PeopleSoft Asset Management Rel 9.2
    · Expenses Rel 9.2
    · PeopleSoft Project Costing Rel 9.2
    · PeopleSoft Billing Rel 9.2
    · PeopleSoft PS / nVision for General Ledger Rel 9.2
    Accelerated Courses (include content from two courses for more experienced team members):
    · PeopleSoft General Ledger Foundation Accelerated Rel 9.2
    · PeopleSoft Billing / Receivables Accelerated Rel 9.2
    · PeopleSoft Purchasing / Payable Accelerated Rel 9.2
    View PeopleSoft Training Overview Video


  • NGinx Best Practices

    - by The Pixel Developer
    What best practices do you use while using NGinx?

    try_files in a subdirectory (credit goes to Igor for helping me with this one):

      location /wordpress {
          try_files $uri $uri/ @wordpress;
      }

      location @wordpress {
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_split_path_info ^(/wordpress)(/.*)$;
          fastcgi_param SCRIPT_FILENAME /var/www/wordpress/index.php;
          fastcgi_param PATH_INFO $fastcgi_path_info;
      }

    Normally PATH_INFO would include the "/wordpress", so we use the "split_path_info" directive to grab the part of the URI after "/wordpress". This allows us to serve WordPress with and without the index.php file.


  • Puppetmaster doesn't notice changes to site.pp

    - by tore-
    Hi, I've just set up a new production environment with Puppet, using 0.25.4 in client/server mode. Ruby is at 1.8.5, CentOS 5.4. I've made a simple manifest for configuring yum-updatesd, but the puppetmaster doesn't seem to notice changes made to site.pp:

      err: Could not parse for environment production: Could not match 'node' at /etc/puppet/manifests/site.pp:1
      err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Could not match 'node' at /etc/puppet/manifests/site.pp:1

    Notice, it says line 1. But line 1 contains an import statement:

      # cat -n /etc/puppet/manifests/site.pp
      1  import "update-notification"
      2
      3  node default {
      4      include update-notification
      5      update-notification::configure()
      6  }

    I've tried rebooting the server, deleting and recreating site.pp, and starting and stopping puppetmaster and puppet, with no luck. What am I missing?
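    Not a confirmed answer, just a guess to test: the parser may be choking on the manifest contents rather than on changes going unnoticed. The update-notification::configure() on line 5 looks like a function-call style class declaration, which the 0.25.x parser is unlikely to accept, and the line number reported for a failed parse can be misleading. A minimal site.pp sketch that relies on module autoloading instead of import (it assumes the class lives in /etc/puppet/modules/update-notification/manifests/init.pp; the names are the poster's own):

      node default {
        include update-notification
      }

    If a separate update-notification::configure class really exists, declaring it with an include statement rather than calling it like a function keeps the parser happy.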


  • New ACS Resell Portfolio for OPN Members

    - by swalker
    Oracle Advanced Customer Support (ACS) Services is pleased to announce the availability of the ACS Resell Portfolio to Oracle PartnerNetwork (OPN) members on June 28, 2012. The ACS Resell Portfolio is available to Gold level OPN members and above selling to end users with valid Oracle Premier Support/End User agreements, and in countries where ACS has a local in-country presence to support the partner business. ACS provides mission critical support services for complex IT environments to help maximize performance, achieve higher availability, and reduce risk. The ACS Resell Portfolio can be leveraged to reduce time to market and drive improved end user satisfaction. Including ACS services at the point of license sale can maximize your success as an Oracle partner. On July 10, 2012, Oracle ACS is hosting a 60-minute resell portfolio training session. Topics include:
    · ACS Resell Portfolio objectives
    · Partner participation requirements
    · ACS portfolio services enabled for partner resell
    · ACS sales engagement and transaction processes
    · Contracting requirements
    Attend the following session to hear how you can maximize your profit opportunities by including ACS services, which complement your solutions with integrated Oracle advanced support technologies.
    July 10, 2012, 4:00 PM CEST
    Webconference Session Number: 591 988 820
    Session Password: ebh12345
    Int’l: 706.501.7506
    US: 866.589.6202
    Call ID: 95867658
    Click here for a list of toll-free international numbers. Please contact [email protected] with any questions or visit the ACS website.


  • OSX Lion - sh: mysql command not found

    - by mkk
    I have a pretty simple issue, I believe. I have successfully installed MySQL etc. I do not have any problems running the mysql command from the terminal [bash]. Suddenly, when my application tried to run the mysql command I got the error: sh: mysql: command not found. In the terminal, when I type sh and then mysql, I can log in to MySQL without any problems. I have added mysql to the PATH in .bash_profile, and I suspect this is why sh cannot see it. I have copied .bash_profile to .profile but it did not do the trick. Any ideas how I can fix this issue?
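    A hedged sketch of the usual workarounds (they assume the mysql.com package layout of /usr/local/mysql/bin, which the question doesn't confirm): processes spawned as plain sh won't read .bash_profile, so either put the binary somewhere already on the inherited PATH, or reference it by full path from the application.

      # make mysql visible without relying on shell startup files
      sudo ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql

      # or, for login shells system-wide on OS X, register the directory with path_helper
      echo '/usr/local/mysql/bin' | sudo tee /etc/paths.d/mysql

    If the application still spawns sh with a minimal PATH, calling /usr/local/mysql/bin/mysql by its absolute path inside the application is the most reliable option.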


  • How to install specific version of MySQL?

    - by user85569
    I installed from the repo, 5.0.77... including the setup of PowerDNS (and its MySQL backend). I tried setting up replication from my master (which is MySQL 5.1.53) but it didn't work: even though there were no errors, nothing got replicated. So the last resort is to try the same MySQL version on both the master and the slave (NB: only the slave has pdns installed). How would I go about installing MySQL 5.1.53? I tried downloading the rpm from MySQL (obviously the wrong one; it didn't even include the mysql command to shell into the databases), but that in turn fucked up the dependencies for pdns' mysql backend. I have the atomic repo, which will install MySQL 5.5 (on both my master server and the slave), but I don't want to do a major upgrade on the master right now as it's in production. Would love some advice!


  • Directory name for non-generic Proprietary stuff

    - by George Bailey
    Is there a common or standard directory name for the company-specific stuff that exists on a server? This would include any crons, scripts, webserver docroots, programs, non-database storage areas, service codebases, etc. We could of course put crons in /etc/cron.d, put docroots in /home/webservd, and scripts in one of the bin directories, but that would be messy. If XYZ Technology Corp wanted to have all the non-generic stuff in one place, would they make a directory /xyz or /home/xyz, or is there an alternative directory name that is not company-specific, but intended for company-specific stuff? What is most common?

