Search Results

Search found 16665 results on 667 pages for 'nhibernate configuration'.


  • Apache port forwarding with ZTE ZXV10 W300 router (provider specific firmware)

    - by dannote
    I'm trying to configure port forwarding for Apache 2.2 installed on Windows XP SP3 behind a ZTE ZXV10 W300 router. The computer has a static IP, 192.168.1.2. Port forwarding is configured as follows: Enable: true; Name: Apache; Protocol: TCP (also tried TCP and UDP); WAN Host Start IP Address: empty; WAN Host End IP Address: empty; WAN Connection: stream; WAN Start Port: 8080; WAN End Port: 8080; LAN Host IP Address: 192.168.1.2; LAN Host Start Port: 8080; LAN Host End Port: 8080. Port 8080 is open for both TCP and UDP in the Windows Firewall. Apache configuration: Listen 192.168.1.2:8080. Router firmware: Hardware Version V1.0.01, Software Version V8.0.02T03_CFA, Boot Loader Version V1.1.2. The provider is COMSTAR. I'm not sure, but it's said they flash the routers with modified firmware. I have also tried to set up BitComet port forwarding on port 13514 and that failed too.

    Read the article

  • Windows Server 2012 VPN Server on AWS VPC EC2 Instance

    - by abran
    I'd like to use Windows Server 2012 VPN on an AWS VPC EC2 instance. The VPC has one public subnet and the EC2 instance has one network adapter. I've taken the following steps but have been unsuccessful; am I missing a step or configuration? Thanks. Configured an Elastic IP for the VPC. Enabled protocols 47, 50, and 51. Added the RRAS role to the (EC2 instance) server. Configured RRAS for VPN only. Note: I'm able to RDP to the EC2 instance, but not able to ping the external IP.

    Read the article

  • Need help setting up mail DNS records

    - by Dave
    Hi, we are hosting our web site on HostMonster, but want our email to continue to be hosted at the old site. Our domain points to the HostMonster DNS servers, but I can't figure out the right configuration for the remote email servers. We have one MX entry (priority: 0, domain: ourdomain.com), and then we have these DNS entries: name: mail.ourdomain.com, ttl: 14400, class: IN, type: A, record: old.host.ip.address; and name: mail1.ourdomain.com, ttl: 14400, class: IN, type: A, record: old.host.secondip.address. Can someone tell me what I need to add or edit to get mail to route correctly to our old host? Thanks, - Dave
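
    For reference, a minimal zone-file sketch of what this kind of remote-mail setup usually looks like: the MX record points at a hostname (such as the existing mail.ourdomain.com A record) rather than at the bare domain, and the A records keep carrying the old host's addresses. The placeholder addresses are taken from the question, not real values.

        ourdomain.com.        14400  IN  MX  0  mail.ourdomain.com.
        mail.ourdomain.com.   14400  IN  A   old.host.ip.address
        mail1.ourdomain.com.  14400  IN  A   old.host.secondip.address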

    Read the article

  • How to change from own Internal/External DNS to use an outsourced service like DNS Made Easy?

    - by Joakim
    Our current setup is a co-located Linux box with an OpenVZ kernel and a handful of virtual containers for www, mail, etc., plus one container running Bind9 with a split-views configuration serving external and internal DNS. The HW node runs a Shorewall firewall and all containers use private IPs. The box (and DNS) basically handles web and mail for a handful of domains and it works well, but we still think it would be a good idea to outsource the public DNS, which brings me to my question. Although I am fairly comfortable with the server stuff and DNS, I'm far from a pro and basically need confirmation that I am thinking in the right direction: I just move the content of our external view (with its zone files) to the external service, keep the internal view (or actually remove the view altogether), set up the zones at the new external DNS provider, update the name servers at my registrar, and wait for propagation, or have I missed something? Maybe someone else here already runs something similar and can share some experiences? I found this question, which at least confirms it can be done.

    Read the article

  • How to install Citrix receiver on Xubuntu 13.04 64-bit

    - by Bård S
    Anyone have a walkthrough on installing Citrix receiver on Xubuntu 13.04 64-bit? Update $ sudo apt-get install libmotif4 nspluginwrapper ... snip ... Setting up libmotif4:amd64 (2.3.3-7ubuntu1) ... Setting up nspluginviewer (1.4.4-0ubuntu5) ... Setting up nspluginwrapper (1.4.4-0ubuntu5) ... plugin dirs: nspluginwrapper: no appropriate viewer found for /usr/lib/flashplugin-installer/libflashplayer.so Auto-update plugins from /usr/lib/mozilla/plugins Looking for plugins in /usr/lib/mozilla/plugins Segmentation fault (core dumped) Processing triggers for libc-bin ... ldconfig deferred processing now taking place sudo dpkg --install Downloads/icaclient_12.1.0_amd64.deb Selecting previously unselected package icaclient. (Reading database ... 155808 files and directories currently installed.) Unpacking icaclient (from .../icaclient_12.1.0_amd64.deb) ... dpkg: dependency problems prevent configuration of icaclient: icaclient depends on libc6-i386 (>= 2.7-1); however: Package libc6-i386 is not installed. icaclient depends on ia32-libs; however: Package ia32-libs is not installed. icaclient depends on lib32z1; however: Package lib32z1 is not installed. icaclient depends on lib32asound2; however: Package lib32asound2 is not installed. dpkg: error processing icaclient (--install): dependency problems - leaving unconfigured Errors were encountered while processing: icaclient
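
    The dpkg output above shows the 64-bit icaclient package failing only because its 32-bit dependencies are missing. A hedged sketch of one common way to satisfy them (the package names are the ones listed in the error output; availability of ia32-libs on 13.04 is an assumption):

        sudo apt-get install libc6-i386 ia32-libs lib32z1 lib32asound2
        sudo dpkg --install Downloads/icaclient_12.1.0_amd64.deb
        sudo apt-get -f install    # pull in anything dpkg still reports as missing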

    Read the article

  • Setting lusca and dansguardian iptables on Ubuntu 12.04 to prevent loop

    - by Heri YT
    I have a server running Ubuntu 12.04 which acts as a proxy cache server (lusca) and internet content filter (DansGuardian), in the following chain: client browser - lusca - DansGuardian - internet. All of this runs on a single machine. Here is a partial lusca configuration from my server: http_port 3128 transparent, and cache_peer 192.168.0.1 parent 8080 0 no-query no-digest no-netdb-exchange default. The DansGuardian settings are mostly the defaults, namely: filterip="blank", filterport=8080, proxyip=192.168.0.1, proxyport=3128. The questions are: can this all work well relying on one machine only? What causes the "WARNING: Forwarding loop detected for:" message? Is it a problem if we just leave it? How do I solve the "WARNING: Forwarding loop detected for:" entries found in /var/log/lusca/cache.log? Thank you.
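
    As a hedged sketch only: forwarding loops in a one-box lusca + DansGuardian setup usually mean the two daemons hand requests back to each other, which is what the configuration above describes (lusca's cache_peer points at DansGuardian on 8080 while DansGuardian's proxyport points back at lusca on 3128). The common layout chains them one way only, with iptables sending client traffic to DansGuardian first; the interface name and addresses below are assumptions:

        # dansguardian.conf - accept clients on 8080 and hand off to lusca
        filterport = 8080
        proxyip = 127.0.0.1
        proxyport = 3128

        # lusca/squid.conf - lusca talks to the internet directly; drop the cache_peer back to 8080
        http_port 3128 transparent

        # iptables - transparently push LAN web traffic into DansGuardian
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080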

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company, a2hosting, has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely or through NetBeans' remote editing features. My question is: how can I use git to separate my site into three areas: live, staging and dev? Here's my initial thought: public_html (live site and git repo); testing: a mirror of the site used for visual tests (full git repo); dev/ticket#: git branches of public_html used for features and bug fixes (full git repo). Version control with git. Initial setup: cd public_html; git init; git add *; git commit -m 'initial commit of the site'; cd ..; git clone public_html testing; mkdir dev. Development: cd dev; git clone ../testing ticket#; all work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test; make granular commits as necessary until dev is done; git push origin master:ticket#; if the above fails, merge the latest testing state into the current dev work (git merge origin/master) and then try the push again; mark ticket# as ready for integration. Integration and deployment process: cd ../../testing; git merge ticket# --no-ff -m "integration test for ticket#" (check for conflicts); run hudson tests; visit www.domain.com/testing for a visual test. If all tests pass: if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin, else git push origin; cd ../public_html; git checkout -f (live site should now have the latest dev from ticket#). Else: revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# that testing failed, with the failure details. Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD. Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again. My questions: Does this workflow make sense, and if not, do you have any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before commit x'?
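
    One note on the reverting question, as a hedged sketch: a merge commit is normally undone with git revert -m 1, which records a new commit cancelling the merge, rather than git checkout master~1 followed by a fresh commit (the SHA below is a placeholder):

        cd testing
        git log --oneline -5                 # find the merge commit created for ticket#
        git revert -m 1 <merge-commit-sha>   # new commit that undoes that merge
        git push origin master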

    Read the article

  • Why doesn't update-manager allow me to upgrade distribution?

    - by spoulson
    I have an Ubuntu 9.04 PC behind a corporate firewall and proxy server. This means that in order to get update-manager to fetch and apply updates, I must set the proxy and authentication settings in the Synaptic network configuration. Once that is done, I can check for updates and things work smoothly (except that I don't get popup notifications of new updates and must check manually every so often). However, distribution upgrades just don't show up in update-manager, such as the newly released 9.10 Karmic Koala. I had the same issue upgrading 8.10 to 9.04 and solved it by downloading and upgrading from the 9.04 ISO. What do I need to do to upgrade to 9.10 using the standard update-manager UI?
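
    A hedged sketch of the usual command-line route for a release upgrade from behind an authenticating proxy (the proxy address is a placeholder, and this assumes /etc/update-manager/release-upgrades still has its default contents):

        # make sure normal releases, not just LTS ones, are offered
        sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades

        # export the corporate proxy for this session and run the upgrader
        export http_proxy=http://user:password@proxy.example.com:8080
        export https_proxy=$http_proxy
        sudo -E do-release-upgrade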

    Read the article

  • Disabling a touchpad in Windows

    - by Shamaoke
    I want to permanently disable the touchpad in Windows on my laptop: firstly, I use a mouse, and secondly, while working I often touch it accidentally, which causes real inconvenience. I tried the following approaches to disable it, with no result: turned it off through the BIOS — there's no such function there; turned it off through the touchpad configuration utility — there's no such function there; turned it off through Device Manager — the disable button is inactive; turned it off through a hotkey (Fn + F1 in my case) — the hotkey doesn't work; uninstalled the proprietary driver — Windows automatically downloaded the standard driver; uninstalled the standard driver and turned off the automatic driver download function (Win + R → "systempropertiesadvanced" → Hardware → Device Installation Settings) — all the same, Windows downloaded and installed the driver. How can I disable the touchpad? Windows 7; an Alps touchpad.

    Read the article

  • Team Foundation Server 2012 Build Global List Problems

    - by Bob Hardister
    My experience with the upgrade and use of TFS 2012 has been very positive. I did come across a couple of issues recently that tripped things up for a while. ISSUE 1 The first issue is that 2012 prior to Update 1 published an invalid build list item value to the collection global list. In 2010, the build global list, list item value syntax is an underscore between the build definition and the build number. In the 2012 RTM this underscore was replaced with a backslash, which is invalid.  Specifically, an upload of the global list fails when the backslash is followed at some point by a period. The error when using the API is: <detail ExceptionMessage="TF26204: The account you entered is not recognized. Contact your Team Foundation Server administrator to add your account." BaseExceptionName="Microsoft.TeamFoundation.WorkItemTracking.Server.ValidationException"><details id="600019" http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/faultdetail/03"http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/faultdetail/03" /></detail> when uploading the global list via the process editor the error is: This issue is corrected in Update1 as the backslash is changed to a forward slash. ISSUE 2 The second issue is that when upgrading from 2010 to 2012, the builds in 2010 are not published to the 2012 global list.  After the upgrade the 2012 global lists doesn’t have any builds and only builds run in 2012 are published to the global list. This was reported to the MSDN forums and Connect. To correct this I wrote a utility to pull all the builds and recreate the builds global list for each project in each collection.  This is a console application with a program.cs, a globallists.cs and a app.config (not published here). The utility connects to TFS 2012, loops through the collections or a target collection as specified in the app.config. Then loops through the projects, the build definitions, and builds.  It creates a global list for each project if that project has at least one build. Then it imports the new list to TFS.  Here’s the code for program and globalists classes. Program.CS using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.TeamFoundation.Framework.Client; using Microsoft.TeamFoundation.Framework.Common; using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.Server; using System.IO; using System.Xml; using Microsoft.TeamFoundation.WorkItemTracking.Client; using System.Diagnostics; using Utilities; using System.Configuration; namespace TFSProjectUpdater_CLC { class Program { static void Main(string[] args) { DateTime temp_d = System.DateTime.Now; string logName = temp_d.ToShortDateString(); logName = logName.Replace("/", "_"); logName = logName + "_" + temp_d.TimeOfDay; logName = logName.Replace(":", "."); logName = "TFSGlobalListBuildsUpdater_" + logName + ".log"; Trace.Listeners.Add(new TextWriterTraceListener(Path.Combine(ConfigurationManager.AppSettings["logLocation"], logName))); Trace.AutoFlush = true; Trace.WriteLine("Start:" + DateTime.Now.ToString()); Console.WriteLine("Start:" + DateTime.Now.ToString()); string tfsServer = ConfigurationManager.AppSettings["TargetTFS"].ToString(); GlobalLists gl = new GlobalLists(); //replace this with the URL to your TFS instance. 
Uri tfsUri = new Uri("https://" + tfsServer + "/tfs"); //bool foundLite = false; TfsConfigurationServer config = new TfsConfigurationServer(tfsUri, new UICredentialsProvider()); config.EnsureAuthenticated(); ITeamProjectCollectionService collectionService = config.GetService<ITeamProjectCollectionService>(); IList<TeamProjectCollection> collections = collectionService.GetCollections().OrderBy(collection => collection.Name.ToString()).ToList(); //target Collection string targetCollection = ConfigurationManager.AppSettings["targetCollection"]; foreach (TeamProjectCollection coll in collections) { if (targetCollection.Equals(string.Empty)) { if (!coll.Name.Equals("TFS Archive") && !coll.Name.Equals("DefaultCol") && !coll.Name.Equals("Team Project Template Gallery")) { doWork(coll, tfsServer); } } else { if (coll.Name.Equals(targetCollection)) { doWork(coll, tfsServer); } } } Trace.WriteLine("Finished:" + DateTime.Now.ToString()); Console.WriteLine("Finished:" + DateTime.Now.ToString()); if (System.Diagnostics.Debugger.IsAttached) { Console.WriteLine("\nHit any key to exit..."); Console.ReadKey(); } Trace.Close(); } static void doWork(TeamProjectCollection coll, string tfsServer) { GlobalLists gl = new GlobalLists(); //target Collection string targetProject = ConfigurationManager.AppSettings["targetProject"]; Trace.WriteLine("Collection: " + coll.Name); Uri u = new Uri("https://" + tfsServer + "/tfs/" + coll.Name.ToString()); TfsTeamProjectCollection c = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(u); ICommonStructureService icss = c.GetService<ICommonStructureService>(); try { Trace.WriteLine("\tChecking Collection Global Lists."); gl.RebuildBuildGlobalLists(c); } catch (Exception ex) { Console.WriteLine("Exception! :" + coll.Name); } } } } GlobalLists.CS using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.Framework.Client; using Microsoft.TeamFoundation.Framework.Common; using Microsoft.TeamFoundation.Server; using Microsoft.TeamFoundation.WorkItemTracking.Client; using Microsoft.TeamFoundation.Build.Client; using System.Configuration; using System.Xml; using System.Xml.Linq; using System.Diagnostics; namespace Utilities { public class GlobalLists { string GL_NewList = @"<gl:GLOBALLISTS xmlns:gl=""http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists""> <GLOBALLIST> </GLOBALLIST> </gl:GLOBALLISTS>"; public void RebuildBuildGlobalLists(TfsTeamProjectCollection _tfs) { WorkItemStore wis = new WorkItemStore(_tfs); //export the current globals lists file for the collection to save as a backup XmlDocument globalListsFile = wis.ExportGlobalLists(); globalListsFile.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_backupGlobalList.xml"); LogExportCurrentCollectionGlobalListsAsBackup(_tfs); //Build a new global build list from each build definition within each team project IBuildServer buildServer = _tfs.GetService<IBuildServer>(); foreach (Project p in wis.Projects) { XmlDocument newProjectGlobalList = new XmlDocument(); newProjectGlobalList.LoadXml(GL_NewList); LogInstanciateNewProjectBuildGlobalList(_tfs, p); BuildNewProjectBuildGlobalList(_tfs, wis, newProjectGlobalList, buildServer, p); LogEndOfProject(_tfs, p); } } // Private Methods private static void BuildNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, WorkItemStore wis, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p) { //locate the template node XmlNamespaceManager 
nsmgr = new XmlNamespaceManager(newProjectGlobalList.NameTable); nsmgr.AddNamespace("gl", "http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists"); XmlNode node = newProjectGlobalList.SelectSingleNode("//gl:GLOBALLISTS/GLOBALLIST", nsmgr); LogLocatedGlobalListNode(_tfs, p); //add the name attribute for the project build global list XmlElement buildListNode = (XmlElement)node; buildListNode.SetAttribute("name", "Builds - " + p.Name); LogAddedBuildNodeName(_tfs, p); //add new builds to the team project build global list bool buildsExist = false; if (AddNewBuilds(_tfs, newProjectGlobalList, buildServer, p, node, buildsExist)) { //import the new build global list for each project that has builds newProjectGlobalList.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_" + p.Name + "_" + "newGlobalList.xml"); //write out temp copy of the global list file to be imported LogImportReady(_tfs, p); wis.ImportGlobalLists(newProjectGlobalList.InnerXml); LogImportComplete(_tfs, p); } } private static bool AddNewBuilds(TfsTeamProjectCollection _tfs, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p, XmlNode node, bool buildsExist) { var buildDefinitions = buildServer.QueryBuildDefinitions(p.Name); foreach (var buildDefinition in buildDefinitions) { var builds = buildDefinition.QueryBuilds(); foreach (var build in builds) { //insert the builds into the current build list node in the correct 2012 format buildsExist = true; XmlElement listItem = newProjectGlobalList.CreateElement("LISTITEM"); listItem.SetAttribute("value", buildDefinition.Name + "/" + build.BuildNumber.ToString().Replace(buildDefinition.Name + "_", "")); node.AppendChild(listItem); } } if (buildsExist) LogBuildListCreated(_tfs, p); else LogNoBuildsInProject(_tfs, p); return buildsExist; } // Logging Methods private static void LogExportCurrentCollectionGlobalListsAsBackup(TfsTeamProjectCollection _tfs) { Trace.WriteLine("\tExported Global List for " + _tfs.Name + " collection."); Console.WriteLine("\tExported Global List for " + _tfs.Name + " collection."); } private void LogInstanciateNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tInstanciated the new build global list for project " + p.Name + " in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tInstanciated the new build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection."); } private static void LogLocatedGlobalListNode(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tLocated the build global list node for project " + p.Name + " in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tLocated the build global list node for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection."); } private static void LogAddedBuildNodeName(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tAdded the name attribute to the build global list for project " + p.Name + " in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tAdded the name attribute to the build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection."); } private static void LogBuildListCreated(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tAdded all builds into the " + "Builds - " + p.Name + " list in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tAdded all builds into the " + "Builds - \n\t\t\t" + p.Name + " list in the \n\t\t\t" + _tfs.Name + " collection."); } private static 
void LogNoBuildsInProject(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tNo builds found for project " + p.Name + " in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tNo builds found for project " + p.Name + " \n\t\t\tin the " + _tfs.Name + " collection."); } private void LogEndOfProject(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tEND OF PROJECT " + p.Name); Trace.WriteLine(" "); Console.WriteLine("\t\tEND OF PROJECT " + p.Name); Console.WriteLine(); } private static void LogImportReady(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tReady to import the build global list for project " + p.Name + " to the " + _tfs.Name + " collection."); Console.WriteLine("\t\tReady to import the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection."); } private static void LogImportComplete(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tImport of the build global list for project " + p.Name + " to the " + _tfs.Name + " collection completed."); Console.WriteLine("\t\tImport of the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection completed."); } } }

    Read the article

  • SQL SERVER – Get Free Books While Learning SQL Server 2012 Error Handling

    - by pinaldave
    Fans of this blog are aware that I have recently released my new books SQL Server Functions and SQL Server 2012 Queries. The books are available in the market in a limited edition, but you can get them for free on Wednesday, Nov 14, 2012. Not only are they free, but you can learn SQL Server 2012 Error Handling as well. My book's co-author Rick Morelan is presenting a webinar tomorrow on SQL Server 2012 Error Handling. Here is a brief abstract of the webinar: People are often shocked when they see the demo in this talk where the first statement fails and all other statements still commit. For example, did you know that BEGIN TRAN…COMMIT TRAN is not enough to make everything work together? These mistakes can still happen to you in SQL Server 2012 if you are not aware of the options. Rick Morelan, creator of Joes2Pros, will teach you how to predict the Error Action and control it with and without structured error handling. Register for the webinar now to learn: how to predict the Error Action and control it; the nuances between successful and failing SQL statements; and essential SQL Server 2012 configuration options. Register for the webinar and be present during the session. My co-author will announce a winner (maybe more than one winner) during the session. If you are present during the session, you are eligible to win the book. The webinar is scheduled for 2 different times to accommodate various time zones: 1) 10am ET/7am PT, 2) 1pm ET/11am PT. Each webinar will have its own winner. You can increase your chances by attending both webinars. Do not miss this opportunity and register for the webinar right now. The recordings of the webinar may not be available. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • ubuntu-10.04-desktop-i386 does not work with HTTP preseed?

    - by netvope
    Installation media: ubuntu-10.04-desktop-i386.iso. I tried a lot of different boot parameters, but either the installer ignored the preseed configuration or it booted directly as a LiveCD. An example of the boot parameters I've tried: auto url=http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash -- If I remove only-ubiquity, it boots as a LiveCD. If I remove boot=casper, it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto, it still can't do an automatic install. If I remove auto, it's the same. What are the correct boot parameters for launching such an installation? From the Apache log of the server hosting preseed.cfg, I can see that the installer has no problem fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt. Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is correct.

    Read the article

  • Become an Oracle Solaris 11 Certified Implementation Specialist!

    - by uwes
    Have you heard about one of the newest certifications from Oracle, the Oracle Solaris 11 Certified Implementation Specialist? If you already have a background in Oracle Solaris, have some previous UNIX knowledge, or are working with or for an Oracle Partner that’s pursuing Oracle Solaris 11 Specialization, then you may be interested in the many different ways to gain this highly valued industry certification. An Oracle Certified Implementation Specialist is recognized as capable of installing, configuring, and implementing Oracle Solaris 11 on enterprise class SPARC and x86 systems. This certification is highly valued by Oracle customers and partners alike, since you will have obtained an updated skill set on the newest and most powerful operating system release from Oracle which will set your company apart. If you’ve already achieved an industry certification in Solaris then you’re just a few steps away from becoming an Oracle Solaris 11 Certified Implementation Specialist. Also, if you’re new to Oracle Solaris, we have a path for you too. Listed below are some of the many options Oracle offers in delivering training the way you need it to help you achieve your goal of being recognized as an Oracle Solaris 11 Implementation Specialist. Which path best describes you? New to UNIX but want/need to achieve Certified status? Training Paths: Oracle Certified Associate, Oracle Solaris 11 System Administrator Exam: 1Z0-821 – Oracle Solaris 11 System Administration Certified on an earlier version of Solaris and want full Administration Certification? Recommended Training class: Transition to Oracle Solaris 11 Exam: 1Z0-820 – Transition to Oracle Solaris 11 Certified on an earlier version of Solaris and want the partner based Implementation Certification? Recommended Training Path: OPN Guided Learning Path Exam: 1Z0-580 – Oracle Solaris 11 Installation and Configuration Essentials Get Started Today!

    Read the article

  • How to deal with a 'public' work environment?

    - by Craige
    In the last 6 months, I have changed desks at my office 4 times. I don't mind, as it's due to expansion of the company and acquiring new office space and getting everybody settled. However, I truly miss the semi-private office I sat in 2 desks ago. I am now sitting in a large room with a number of other people. My problem with this isn't with my co-workers; everybody here is great. My problem is that based on the configuration of the room, no matter which desk I sit in, my monitors WILL be facing an open window. This causes a glare on my monitors, and it drives me crazy. I prefer a dark IDE theme as I find it easier on the eyes, however this just makes the glare that much worse. How should programmers cope with public office settings? Secondly, how should I cope with my specific problem? Should I give in and adopt a light IDE theme that will reduce the visibility of the glare but increase eye strain, or should I stick to my guns and find another solution?

    Read the article

  • How can I tell whether an interrupted rm -r removed any files?

    - by Jake Petroules
    I installed sshfs on a Linux box and then mounted my Mac home directory. In the middle of troubleshooting a configuration issue, I did an ls -l on the mount directory (as a normal user), receiving: total 0 d????????? ? ? ? ? ? sl. I then ran sudo rm -r on that directory but pressed Ctrl+C to terminate it immediately, before it looked like the command did anything. I notice no files missing, but I want to be sure - is there a way I can somehow inspect the filesystem log on my Mac to see whether any files were actually removed?
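
    There is no per-file deletion journal to read back on a stock OS X install, so the practical check is a comparison against a known-good copy. A hedged sketch, assuming a Time Machine (or any other) backup of the home directory exists; the paths are placeholders:

        # dry run: anything listed would be copied back, i.e. it exists in the
        # backup but is missing (or differs) in the live home directory
        rsync -avn /Volumes/TimeMachineBackup/Users/you/ /Users/you/ | less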

    Read the article

  • Centos 6.5 -- backported upgrades/php.ini directives included in php 5.3.3

    - by Decave
    PHP 5.3.3 is the latest version of PHP available from the official CentOS 6.5 repos. As most of you know, calling it version '5.3.3' is slightly deceptive, because critical bug fixes are actually backported into version 5.3.3, so in effect 'version 5.3.3' does get upgraded now and then. My question is: aside from manually toggling directives in php.ini, how can you tell which newer directives, implemented in and officially supported by later versions of PHP, are also available in CentOS 6.5's backported PHP 5.3.3? For example, max_input_vars (http://php.net/manual/en/info.configuration.php#ini.max-input-vars) has been available since PHP 5.3.9. Is there an easy way to tell whether CentOS included this in a backported upgrade to 5.3.3? Thanks!
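
    Two hedged checks that usually answer this without editing php.ini: ask the installed PHP whether the directive exists at all, and read the package changelog to see what was actually folded into the distro's 5.3.3 build.

        # bool(false) means the directive is unknown to this build;
        # a real value means it was backported
        php -r 'var_dump(ini_get("max_input_vars"));'

        # list the patches and fixes the distro applied to its php packages
        rpm -q --changelog php | less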

    Read the article

  • Configuring a Jetty web application on a different port

    - by sHz
    Hi folks, I'm brand new to Jetty. I'd like to ask if it's possible to have Jetty listening on port 8080 but, where specified, serve a specific web application under, say, /var/jetty/webapps/<appname> (the default on CentOS) on, say, port 10000 instead of at http://localhost:8080/<appname>, i.e. http://localhost:10000/ = http://localhost:8080/<appname>? If so, what configuration changes would be required to make this work without an additional proxy server? I've googled away but haven't found a solution (perhaps I've missed something obvious?).
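
    A hedged sketch of one way this is commonly approached without a proxy, written against the Jetty 6-style layout that CentOS packages shipped at the time (class names and paths differ in later Jetty versions, and binding a context to only one connector additionally requires naming the connector and listing it in the context's connectorNames):

        <!-- extra connector in /etc/jetty/jetty.xml -->
        <Call name="addConnector">
          <Arg>
            <New class="org.mortbay.jetty.nio.SelectChannelConnector">
              <Set name="port">10000</Set>
            </New>
          </Arg>
        </Call>

        <!-- context descriptor, e.g. /var/jetty/contexts/appname.xml -->
        <Configure class="org.mortbay.jetty.webapp.WebAppContext">
          <Set name="contextPath">/</Set>
          <Set name="war">/var/jetty/webapps/appname</Set>
        </Configure>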

    Read the article

  • Unable to modify phpMyAdmin variables via the Variables tab (XAMPP)

    - by rookie coder
    I am quite new to phpMyAdmin configuration. I have a project where utf8 encoding is needed, so what I'm trying to do is change all the text/char variables to utf8. I changed them, and at that moment the values changed to the ones I wanted. But when I terminate XAMPP and re-enter the phpMyAdmin page, or even just refresh the page, all the values are restored to their defaults (the original values). My phpMyAdmin has the default user root with no password set yet, and there is no logout button on the phpMyAdmin landing page. I also had a difficult time even setting the server connection collation (it hangs indefinitely and never seems to update). phpMyAdmin version: 4.1.6, MySQL: 5.5.36 (latest version). I doubt this is due to a malformed installation, because the same thing happens on my other computer too (exactly the same versions). What could be wrong? Thanks.
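
    Values changed on the Variables tab are applied with SET GLOBAL, which only lasts until mysqld restarts, so they will always snap back after stopping XAMPP. Persistent defaults belong in the MySQL config file. A hedged sketch for the XAMPP bundle (the file is typically xampp\mysql\bin\my.ini on Windows; restart MySQL from the XAMPP control panel afterwards):

        [mysqld]
        character-set-server = utf8
        collation-server     = utf8_general_ci

        [client]
        default-character-set = utf8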

    Read the article

  • IIS reveals internal IP address in content-location field - fix

    - by saille
    Referring to http://support.microsoft.com/kb/q218180/, there is a known issue in IIS 4/5/6 whereby it will reveal the internal IP of a web server in the Content-Location field of the HTTP header. We have IIS 6. I have tried the suggested fix, but it has not worked. The website is configured to send all requests to ASP.NET, and I am wondering if this is why the fix, which addresses IIS configuration, has not worked for us. If this is the case, how would we fix this in ASP.NET? We need to fix this issue in order to pass a security audit.
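
    For reference, a hedged restatement of the KB's IIS 6 approach, since it is a metabase change that only takes effect once it is applied at the right node and IIS is restarted (the site-level path w3svc/1 is an assumption; run from %SystemDrive%\Inetpub\AdminScripts):

        cscript adsutil.vbs set w3svc/UseHostName true
        rem or, for a single site:  cscript adsutil.vbs set w3svc/1/UseHostName true
        iisreset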

    Read the article

  • Should I consolidate multiple identical VMs into BSD jails?

    - by Josh
    We run a number of Openfire XMPP/Jabber servers. Due to the way Openfire works, we cannot easily run multiple Openfire instances on one server, so I have 5 identical VMware ESXi VMs, each with CentOS, MySQL, Java, and Openfire. They're exactly the same, except for their IP addresses, the actual Openfire MySQL database and its config file. I am wondering if this is the optimal configuration, or if it would be better to move these VMs to a single FreeBSD machine and put each one inside a FreeBSD jail. Specifically, I am wondering if the benefit of VMware's Transparent Page Sharing (TPS) would outweigh the cost of running 5 identical OSes. Would I end up using less memory with one large FreeBSD machine and Java running in BSD jails?

    Read the article

  • Varnish 3.0.2 to Apache2 sometimes returns error 503

    - by Ronnie Jespersen
    Hey guys I hope you can help me out here. I have an Ngingx parsing http and https to a varnish cache(3.0.2). From the varnish it is sent to apache2. Now I have for some time been tracking some strange 503 errors. But I cant seem to find the silver bullet. Currently I am logging the 503 errors through varnish this way: sudo varnishlog -c -m TxStatus:503 >> /home/rj/varnishlog503.log and then referring to the apache access log to see if any 503 requests have been handled. Today I had a health check from the firewall that failed: 20 SessionOpen c 127.0.0.1 34319 :8081 20 ReqStart c 127.0.0.1 34319 607335635 20 RxRequest c HEAD 20 RxURL c /health-check 20 RxProtocol c HTTP/1.0 20 RxHeader c X-Real-IP: 192.168.3.254 20 RxHeader c Host: 192.168.3.189 20 RxHeader c X-Forwarded-For: 192.168.3.254 20 RxHeader c Connection: close 20 RxHeader c User-Agent: Astaro Service Monitor 0.9 20 RxHeader c Accept: */* 20 VCL_call c recv lookup 20 VCL_call c hash 20 Hash c /health-check 20 VCL_return c hash 20 VCL_call c miss fetch 20 Backend c 33 aurum aurum 20 FetchError c http first read error: -1 11 (No error recorded) 20 VCL_call c error deliver 20 VCL_call c deliver deliver 20 TxProtocol c HTTP/1.1 20 TxStatus c 503 20 TxResponse c Service Unavailable 20 TxHeader c Server: Varnish 20 TxHeader c Content-Type: text/html; charset=utf-8 20 TxHeader c Retry-After: 5 20 TxHeader c Content-Length: 879 20 TxHeader c Accept-Ranges: bytes 20 TxHeader c Date: Wed, 06 Jun 2012 12:35:12 GMT 20 TxHeader c X-Varnish: 607335635 20 TxHeader c Age: 60 20 TxHeader c Via: 1.1 varnish 20 TxHeader c Connection: close 20 Length c 879 20 ReqEnd c 607335635 1338986052.649786949 1338986112.648169994 0.000160217 59.997980356 0.000402689 Now the backend server (apache) does not have any 503 error in the access log at this point. So I am confused. Is this varnish throwing a 503 because it thinks apache is to slow? There is a lot traffic coming through at this point so I know the server is up and running. I do have other 503 error codes with posts and gets so there is really no pattern. It seems to be at random times and random requests. Even in the morning when the server dosen't seem to be doing anything. I do see another pattern in the log: 4 VCL_call c recv pass 4 VCL_call c hash 4 Hash c /?id=412 4 VCL_return c hash 4 VCL_call c pass pass 4 FetchError c no backend connection 4 VCL_call c error deliver 4 VCL_call c deliver deliver Here fetcherror says "no backend connection". 
A summery of the FetchErrors in todays log: 16 FetchError c http first read error: -1 11 (No error recorded) 5 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 19 FetchError c http first read error: -1 11 (No error recorded) 5 FetchError c http first read error: -1 11 (No error recorded) 23 FetchError c http first read error: -1 11 (No error recorded) 24 FetchError c http first read error: -1 11 (No error recorded) 16 FetchError c http first read error: -1 11 (No error recorded) 6 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 5 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 22 FetchError c http first read error: -1 11 (No error recorded) 6 FetchError c http first read error: -1 11 (No error recorded) 21 FetchError c http first read error: -1 11 (No error recorded) 26 FetchError c no backend connection 4 FetchError c no backend connection 20 FetchError c http first read error: -1 11 (No error recorded) 39 FetchError c http first read error: -1 11 (No error recorded) I haven't changed the default timeout values for varnish. This is my configuration for one of the backend servers. backend xenon { .host = "192.168.3.187"; .port = "80"; .probe = { .url = "/health-check/"; .interval = 3s; .window = 5; .threshold = 2; } } I'm running prefork module on apache2 with this configuration <IfModule mpm_prefork_module> StartServers 1 MinSpareServers 2 MaxSpareServers 5 MaxClients 200 MaxRequestsPerChild 75 </IfModule> and only PHP files is sent to the server. Every other static file is handled by Nginx. Any ideas? 
------- EDIT -------------- Some more debuging information I have run a varnishadm debug.health Backend radon is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.002560 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Backend xenon is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.002760 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Backend iridium is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.000849 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Backend aurum is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.002100 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy And I have been monitoring varnishstat from the two load balancers 3224774 3.99 2.61 backend_conn - Backend conn. success 27 0.00 0.00 backend_unhealthy - Backend conn. not attempted 63 0.00 0.00 backend_fail - Backend conn. failures 358798 0.00 0.29 backend_reuse - Backend conn. reuses 21035 0.00 0.02 backend_toolate - Backend conn. was closed 379834 0.00 0.31 backend_recycle - Backend conn. recycles 26 0.00 0.00 backend_retry - Backend conn. retry 3217751 5.99 2.61 backend_conn - Backend conn. success 32 0.00 0.00 backend_fail - Backend conn. failures 364185 0.00 0.30 backend_reuse - Backend conn. reuses 27077 0.00 0.02 backend_toolate - Backend conn. was closed 391263 0.00 0.32 backend_recycle - Backend conn. recycles 36 0.00 0.00 backend_retry - Backend conn. retry Notice that none of them have reported backend_fail. /Ronnie
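
    As a hedged reading of the log: "http first read error" is what Varnish 3 records when the backend accepts the connection but does not return the first byte of the response within the fetch timeouts, which fits an Apache whose 200 prefork slots are busy (MaxRequestsPerChild 75 also recycles workers very aggressively for a PHP-only backend). A sketch of more generous per-backend timeouts in Varnish 3 VCL; the values are illustrative, not recommendations:

        backend xenon {
            .host = "192.168.3.187";
            .port = "80";
            .connect_timeout       = 5s;
            .first_byte_timeout    = 120s;
            .between_bytes_timeout = 60s;
            .probe = {
                .url = "/health-check/";
                .interval = 3s;
                .window = 5;
                .threshold = 2;
            }
        }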

    Read the article

  • DON'T MISS: Live Webcast - Nimble SmartStack for Oracle with Cisco UCS (Nov 12)

    - by Zeynep Koch
    You are invited to the live webcast with Nimble Storage, Oracle and Cisco where we will talk about the new SmartStack solution from Nimble Storage that features Oracle Linux, Oracle VM and Cisco UCS products. In this webinar, you will learn how Nimble Storage SmartStack with Oracle and Cisco provides a converged infrastructure for Oracle Database environments with Oracle Linux and Oracle VM. SmartStack, built on best-of-breed components, delivers the performance and reliability needed for deploying Oracle on a single symmetric multiprocessing (SMP) server or Oracle Real Application Clusters (RAC) on multiple nodes.  When : Tuesday, November 12, 2013, 11:00 AM Pacific Time Panelists: Michele Resta, Director of Linux and Virtualization Alliances, Oracle John McAbel, Senior Product Manager, Cisco Ibby Rahmani, Solutions Marketing, Nimble Storage SmartStack™solutions provide pre-validated reference architectures that speed deployments and minimize risk.      The pre-validated converged infrastructure is based on an Oracle Validated Configuration that includes Oracle Database and Oracle Linux with the Unbreakable Enterprise Kernel.     The solution components include a Nimble Storage CS-Series array, two Cisco UCS B200 M3 blade servers, Oracle Linux 6 Update 4 with the Unbreakable Enterprise Kernel, and Oracle Database 11g Release 2 or Oracle Database 12c Release 1.     The Nimble Storage CS-Series is certified with Oracle VM 3.2 providing an even more flexible solution leveraging virtualization for functions such as test and development by delivering excellent random I/O performance in Oracle VM environments. Register today 

    Read the article

  • Grub options are not visible on booting on Samsung ATIV Book 9 Lite running Ubuntu 14.04

    - by mjwittering
    I've managed to install Ubuntu 14.04 on my new Samsung ATIV Book 9 Lite ultrabook. After updating some configuration in the UEFI, installation was very easy. The only issue I believe I'm still experiencing is when booting: at the point where the laptop should be displaying the GRUB boot options, I see a black screen with a 10px purple border around it. I'd like to know how I can update my system so that I see the GRUB boot menu. I've run these commands: sudo cat /etc/default/grub # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" The command sudo efibootmgr did not work.
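
    The purple screen with no menu is what the stock Ubuntu settings produce when GRUB_HIDDEN_TIMEOUT=0 hides the menu entirely. A hedged sketch of the usual /etc/default/grub changes to make the menu show (GRUB_GFXMODE is an optional extra in case the panel dislikes the default video mode):

        # /etc/default/grub
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0          # comment out so the menu is not hidden
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        #GRUB_GFXMODE=1024x768          # optionally force a mode the panel supports

        # then regenerate grub.cfg
        sudo update-grub

    On many machines, holding Shift during boot also brings the menu up once, which is a quick way to confirm GRUB itself is working.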

    Read the article

  • Hosting custom domains with IP address flexibility

    - by F21
    I am building a small service where users will be assigned a subdomain such as myusername.myservice.com or anotheruser.myservice.com. I know that I can set up a wildcard vhost and, using some configuration regex, serve the files like so: myusername.myservice.com ===> /var/www/myusername, anotherusername.myservice.com ===> /var/www/anotherusername. The problem is that I would like to allow users to alias their own domain names to their service. I understand that, for the webserver, once the user adds the domain via my web interface I can easily create a vhost for the domain in nginx and then reload the webserver. However, I would prefer NOT to have users add an A record pointing at my webserver's IP address, because I want to keep things flexible for when we upgrade our infrastructure to something more complex at scale. What is the best way to achieve this?
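
    One common way to keep the server IP out of customers' hands is to have them point a CNAME at a hostname you control, so only your own A record ever has to change when the infrastructure moves. A hedged zone sketch (names and the 203.0.113.10 address are placeholders; a bare apex domain generally cannot be a CNAME, so customers would use www or another subdomain, or their DNS provider's ALIAS/ANAME feature):

        ; customer's zone
        www.customerdomain.com.   3600  IN  CNAME  anotheruser.myservice.com.

        ; your zone - the only place the web server's address appears
        *.myservice.com.          300   IN  A      203.0.113.10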

    Read the article

  • Ideas for SVN/SQL/PHP/Linux Dev Enviroment Supporting Multiple Isolated Environments?

    - by jpganz18
    I am trying to create a "dev" environment for my users. In that environment they would each have access to their own accounts for phpMyAdmin, SQL, Subversion and FTP, which is not a big problem, but I would like to emulate each user being on their own server. I mean that they could change the PHP configuration (for example) and it would apply only to their own environment. Any idea how to do this? Do I have to do something "special" in my server installation or something like that?
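
    Per-user PHP settings without separate servers are commonly handled with one PHP-FPM pool per user, each running as that user with its own overrides, fronted by that user's vhost. A hedged sketch of a pool file; the pool name, paths and values are assumptions:

        ; /etc/php5/fpm/pool.d/alice.conf
        [alice]
        user  = alice
        group = alice
        listen = /var/run/php5-fpm-alice.sock
        listen.owner = www-data
        listen.group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2

        ; the knobs this user can treat as "their own php.ini"
        php_admin_value[memory_limit] = 256M
        php_admin_value[upload_max_filesize] = 32M
        php_admin_flag[display_errors] = on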

    Read the article
