Search Results

Search found 9715 results on 389 pages for 'servers'.

Page 174/389

  • Is this a DNS or server-side error?

    - by joshlfisher
    I am having difficulty accessing a specific website (I get 500 Server Fault errors). I can access the site on my iPhone when NOT connected to WiFi, but I CANNOT access it when connected to WiFi or via an Ethernet connection to my home network. I thought it might be a DNS issue, so I copied the DNS servers from a friend who has a different ISP and has no problem accessing the site. No luck. I also tried some of the public DNS servers out there, again with no luck. Does anyone have any idea how to trace this issue?
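
    One way to narrow it down (a rough sketch only; thesite.example and 203.0.113.7 are placeholders for the real domain and the address it resolves to on the working cellular connection) is to compare the answer your home network's resolver gives against a public resolver, and then request the page by a known-good IP so DNS is taken out of the equation entirely. If the 500 still appears in the last step, the problem is on the server side rather than in your DNS:

        dig thesite.example                  # answer from the home router / ISP resolver
        dig thesite.example @8.8.8.8         # answer from a public resolver
        curl -v --resolve thesite.example:80:203.0.113.7 http://thesite.example/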

    Read the article

  • Upgrade from 9.10 to 11

    - by Fernando Costa
    I have Ubuntu 9.10 (Karmic) installed on my machine and I got a warning prompting me to upgrade to 10.04, which is fine by me, and I would like to upgrade. But here is my question: if I do, what will happen to my file system and all my files? My programs, my Apache configuration, all my servers? Does everything reset to default? Will I lose all my data? Because if the answer is yes and I will lose everything, why does such a warning appear at all? Then the best solution would be to format everything and install a brand new Ubuntu version 11. Otherwise I will keep using the 9.10 Karmic version and just update normally as prompted. What is the best thing to do in this situation? I appreciate any help!

    Read the article

  • How to revoke gnupg public key without private key?

    - by danijelc
    Long story short, I have a key generated with Seahorse and mistakenly deleted it from my system. I do remember the passphrase, but I no longer have the key anywhere on my system. I scanned through Ask Ubuntu but couldn't find any applicable solution to a similar issue. However, the public key is still published on the keyservers and I would like to revoke it. Since I have no revocation certificate and can't get hold of the private key (only the public key is available from the keyservers, which I imported into Seahorse), I have no idea how to accomplish this. I have spent some time searching for a solution across the net, various manuals and so on, but so far no luck.

        gpg --list-secret-keys      - returns no output at all
        gpg --list-keys             - returns the public key info
        gpg --gen-revoke *user-id*  - returns: gpg: secret key *user-id* not found: eof

    gpg (GnuPG) version 1.4.11. Can anyone suggest a solution?

    Read the article

  • Test case as a function or test case as a class

    - by GodMan
    I am facing a design problem in test automation. Requirements: the framework needs to test different servers (through a Unix console, not a GUI). Tests I am going to run: unit, system, integration. Question: while designing a test case, I am thinking that a test case should be part of a test suite (where the test suite is a class), just as in Python's pyunit framework. But for a scalable automation framework, should we keep test cases as functions, or should we keep each test case as a separate class (each having its own setup, run and teardown methods)? From an automation perspective, which is more scalable and maintainable: a test case as a class, or as a function?
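
    To make the comparison concrete, here is a minimal sketch of both styles using Python's standard unittest module (the run_console helper below is a local stand-in for the framework's remote Unix console call, and the checks themselves are placeholders):

        import subprocess
        import unittest

        def run_console(command):
            """Stand-in for the framework's remote console call (runs locally here)."""
            return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

        class DiskSpaceTestCase(unittest.TestCase):
            """Style 1: the test case is a class with its own setup, run and teardown."""

            def setUp(self):
                self.console = run_console          # would open a session to the server under test

            def tearDown(self):
                pass                                # would close that session here

            def test_root_partition_below_90_percent(self):
                pcent = self.console("df --output=pcent / | tail -1").strip().rstrip("%")
                self.assertLess(int(pcent), 90)

        # Style 2: the test case is a plain function, collected into a suite elsewhere.
        def test_uptime_reports_load():
            assert "load average" in run_console("uptime")

        if __name__ == "__main__":
            unittest.main()

    In practice the class style tends to scale better when every case needs isolated setup and teardown against a live server, while the function style stays lighter when many cases can share a single fixture.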

    Read the article

  • Does a redirect popup window affect SEO?

    - by Joseph
    We have multiple websites, and each site serves a number of countries. We used to have a Geo-IP auto-redirect system (no one likes auto-redirects), so we implemented another redirect system that also uses the Geo-IP database but shows a pop-up window (an HTML layer pop-up, so it can't be blocked). This window asks the visitor whether he would like to continue with the current page or go to the correct website for his country. We also added a test before showing the pop-up, so if the visitor is Googlebot, the pop-up will not show up :). I was wondering whether this affects our websites' SEO?

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things that makes it unique among WebLogic Server applications is its requirement for a shared file system.  This is actually no different than 10g and previous versions of UCM, when it ran directly on a JVM.  And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured, and if they aren't followed, you may see some unwanted behavior.  This blog post will go into the details of exactly how the file systems should be split and what options are required.

    Beyond documents being stored on the file system and/or database, and metadata being stored in the database along with other structured data, there is other information being read from and written to the file system.  Information such as user profile preferences, workflow item state information, metadata profiles, and other details are stored in files.  In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other.  WCC keeps track of this through the use of lock files on the file system.  Because of this, each node of the WCC cluster must have access to the same file system, just as they have access to the same database.

    WCC uses its own locking mechanism based on files, so it also needs to access those files without file attribute caching and without locking being done by the client (node).  If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on.  Or if a particular file is locked by one of the node clients, this could interfere with access by another node.  Unfortunately, disabling file attribute caching on the file share can impact performance, so it is important to disable caching and locking only on the particular folders which require it.

    When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder.  Starting in PS5, it also asks for the User Profile Folder.  Even if you plan on storing the content in the database, you still need to establish the Native File (Vault) and Weblayout directories.  These will be used for handling temporary files, cached files, and files used to deliver the UI.

    Of these directories, the only folder which needs to have file attribute caching and locking disabled is the 'Content Server Instance Folder'.  So when establishing this share through NFS or a clustered file system, be sure to specify those options.  For instance, if creating the share through NFS, use the 'noac' and 'nolock' mount options.  For the other directories, caching and locking should be left enabled to provide the best performance for those locations.
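
    As a rough sketch (the NFS server name and export paths below are placeholders; the mount points match the shares used in the configuration that follows), the fstab entries might look something like this, with caching and locking disabled only on the instance-folder share:

        nas01:/export/ucm/intradoc   /mnt/share_no_cache     nfs   rw,hard,noac,nolock   0 0
        nas01:/export/ucm/content    /mnt/share_with_cache   nfs   rw,hard               0 0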
    These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:

        #Server System Properties
        IDC_Id=UCM_server1

        #Server Directory Variables
        IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
        FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
        AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
        AppServerJavaUse64Bit=true
        IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
        VaultDir=/mnt/share_with_cache/ucm/cs/vault/
        WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

        #Server Classpath variables

        #Additional Variables
        #NOTE: UserProfilesDir is only available in PS5 - 11.1.1.6.0
        UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

    In addition to these folder configurations, it's also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory.  So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:

        VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
        TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
        EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

    And of course, don't forget the cluster-specific configuration values to add as well.  These can be added through Admin Server -> General Configuration -> Additional Configuration Variables, or directly in the <IntradocDir>/config/config.cfg file:

        ArchiverDoLocks=true
        DisableSharedCacheChecking=true
        ServiceAllowRetry=true     (use only with Oracle RAC Database)
        PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site.  In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • How to configure apache / php / postfix website emails when using vhosts?

    - by Alistair Buxton
    I have a LAMP web server configured to serve multiple websites. Each virtual host runs various PHP applications, mainly WordPress. When users sign up to the WordPress sites, email is sent by PHP through to Postfix, and then on to the receiver. The problem is that Postfix is identifying itself to the remote server with the contents of /etc/hostname, which is not a fully qualified domain name. Some mail servers reject this and the mail bounces. Additionally, the return path is being set to one of the virtual host domains, seemingly at random. I could set /etc/hostname to one of the website domain names, but then the emails from the other websites would have the wrong server in their headers, and this would not fix the return-path issue. Possibly related: apache2 says "could not determine the server's fully qualified domain name" on startup. How do I fix this so that each website can send email without revealing the other websites hosted on the server?
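
    One possible direction (a sketch only, not a drop-in fix; mail.example.com and the bounce address are placeholders, and this assumes mod_php with a stock Postfix) is to give the mail system a neutral fully qualified name of its own and then pin each vhost's envelope sender, so the return path no longer floats between sites:

        # /etc/postfix/main.cf
        myhostname = mail.example.com
        myorigin = $myhostname
        smtp_helo_name = $myhostname

        # Apache global config (also silences the FQDN warning on startup)
        ServerName mail.example.com

        # inside each <VirtualHost>: force the envelope sender PHP's mail() uses
        php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f bounces@site-one.example"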

    Read the article

  • U1 music mp3 files not put into albums

    - by david
    Via the web page I can see that my files sync to the U1 cloud servers. For the mp3 files there seems to be a problem that several questions have already addressed, but there does not seem to be a clear answer. If I use EasyTAG 2.1.6, I can see the ID3 tags on the local files and they seem to correctly define the artist, album title and track name. I expect it is not relevant, but I am using 10.04 with several different clients to rip the CDs. However, some mp3 files do not appear in the cloud at all, and some others get assigned to Various Artists or Unknown Artist. Does the music streaming (e.g. via iPad) use the tags or the directory/file structure to assign the artist or album, and how quickly should it be expected to work? :-) Which version of ID3 tags does U1 music streaming work best with or prefer? Thanks for any help, David

    Read the article

  • Set up development site on another server/host

    - by Ofeargall
    I'm developing a site for a client. They've got a site now that's hosted at hosting.com. I'm going to move them to my VM hosting solution at Edgeweb, but I want to run some tests and have the client approve the site before changing the name servers to the new site/hosting location. How do I make this happen? I'm running Red Hat with Apache on Linux for the Edgeweb hosting. I don't have control of the domain name (i.e. the client controls that right now). Edgeweb has set up a DNS zone for the domain name so that when the time comes to switch we're ready to go. I'm a web developer and I understand the technologies that make a user experience 'work', but I'm unfamiliar with the server jargon and all that, so please be patient. Thanks in advance.
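
    One common way to preview the site before the nameserver change (a sketch; 203.0.113.10 and clientdomain.com are placeholders for the new server's IP and the client's domain) is to point only your own machine, and the client's, at the new server through the local hosts file, while the rest of the world keeps resolving to the old host:

        # /etc/hosts on your workstation (C:\Windows\System32\drivers\etc\hosts on Windows)
        203.0.113.10   clientdomain.com   www.clientdomain.com

    The Apache vhost on the new server just needs a matching ServerName/ServerAlias so the test requests land on the right site; remove the hosts entry once the real DNS is switched.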

    Read the article

  • What is the SEO impact of moving my domain to another IP address and what is the right way of doing this?

    - by ElHaix
    I am planning to move several websites to a new hosting provider, keeping the same URLs but resolving to different IP addresses. For example, some sites are Canadian content-only sites, hosted on .CA domains sitting on Canadian IP addresses. I want to move these to Amazon servers which have US IP addresses. The domain names will remain the same. (1) What is the SEO impact of this? (2) Will the sites lose some ranking if they are moved to a new IP address (Canadian or not), and if so, what is the cleanest way of accomplishing this (some kind of 301s)?

    Read the article

  • Using multiple A-records for my domain - do web browsers ever try more than one?

    - by Jonas
    If I add multiple A records for my domain, they are returned in round-robin order by DNS servers, e.g.:

        1.1.1.1 A example.com
        1.1.1.2 A example.com
        1.1.1.3 A example.com

    But how do web browsers react if the first host (1.1.1.1) is down (unreachable)? Do they try the second host (1.1.1.2), or do they return an error message to the user? Are there any differences between the most popular browsers? If I implemented my own application, I could make it use the second address whenever the first is down, so it's certainly possible, and it would be very helpful for building a fault-tolerant website.
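
    For the application side of that last point, here is a minimal sketch of the fallback behaviour using only the Python standard library (example.com and port 80 are placeholders): getaddrinfo returns every A record, and the client simply tries them in order:

        import socket

        def connect_with_fallback(host, port, timeout=3.0):
            """Try each address returned for the name until one accepts a TCP connection."""
            last_error = None
            for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
                    host, port, 0, socket.SOCK_STREAM):
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(timeout)
                try:
                    sock.connect(sockaddr)
                    return sock                      # first reachable address wins
                except OSError as exc:
                    last_error = exc
                    sock.close()
            raise last_error or OSError("no usable addresses for %s" % host)

        conn = connect_with_fallback("example.com", 80)
        conn.close()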

    Read the article

  • How do I import Amazon MP3s with Banshee and the new Amazon Cloud Player?

    - by adempewolff
    Banshee's Amazon MP3 Import extension until recently allowed seamless importing of songs purchased from Amazon MP3. It did this by (a) opening .amz files and using them to connect to and download the purchased files from Amazon's servers, and (b) using hooks in Banshee's built-in browser to automatically recognize and open .amz files when they are clicked in the browser. However, this functionality recently stopped working. Banshee will display "Contacting Server" in the lower left-hand corner for a little while and then stop. Furthermore, opening the Amazon Cloud Player in the Banshee browser, or in any other browser on a Linux system, to manually download the .amz file now results in the message: "On Linux systems, Cloud Player only supports downloading songs one at a time. To download your music, deselect all checkboxes, select the checkbox for the song you want to download, then click the 'Download' button." How can I get around this and import my purchased music into Banshee as I used to?

    Read the article

  • Switch encoding of terminal with a command

    - by Tomas Lycken
    One of the servers I quite often ssh to uses western encoding instead of utf-8 (and there's no way I can change that). I've started writing a bash script to connect to this server, so I won't have to type out the entire address every time, but I would like to improve this script so it also changes the encoding of the terminal window correctly. The change I need to do can be performed using the mouse by navigating to "Terminal"-"Set Character Encoding..."-"Western (ISO-8859-1)". Is there a terminal command that does the same thing, for the current terminal window/screen? To clarify: I'm not interested in ways of switching the locale of the system on the remote site - that system is administered by someone else, and I have no idea what stuff might depend on the latin-1 encoding there. What I want to do is to let this terminal window on my side switch character encoding to the above mentioned, in the same way I can do with my mouse and the menus.

    Read the article

  • Web Hosting Backup/Disaster Recovery Plan - Which Company?

    - by Harry Muscle
    I've been asked to look after consolidating all of our various company websites onto one host and also to provide a disaster recovery plan in case the chosen host goes down, goes out of business, etc. We're most likely going to go with HostGator as our chosen host; however, I'm not sure who to pick for our backup host. HostGator uses cPanel and has the functionality to provide regular full (i.e. including configuration) backups of all the sites we host. Ideally I'm looking for a solution where we can provide these backups to another company, and within a short period of time they restore all the sites onto their servers and we're back up and running. The whole disaster recovery process has to be fairly straightforward from the point of view of what we need to do, in case I am unavailable to assist and no one else overly technical is available (i.e. "take these backup files, send them to this company, and ask them to do this"). Any suggestions on which company would be a good choice for this backup solution would be highly appreciated. Thanks, Harry

    Read the article

  • Should I rely on externally-hosted services?

    - by Mattis
    I am wondering about the dangers and difficulties of using external services like Google Charts on my production website. By external services I mean ones that you can't download and host on your own server.

    (-) Potentially the Google service can be down while my site is up.
    (+) I don't have to develop those particular systems for new browser technologies; hopefully Google will do that for me.
    (-) Extra latency while my site fetches the data from the Google servers.

    What else? Is it worth spending time and money to develop my own systems to be more in control of things?

    Read the article

  • How to use Fixedsys in the Gnome Terminal, or wherever monospaced fonts are required

    - by Walter Tross
    I think that the Fixedsys font is one of the most readable monospaced fonts for programming. It has zero antialiasing, with vertical lines mostly 2 pixels wide. Close to ideal for current monitor dot pitches, in my eyes (literally). After years of Windows at home (for family reasons) and Linux servers at work accessed through Cygwin on Windows (for company policy reasons), with Fixedsys as the shell and IDE font, I have decided to switch to the Ubuntu desktop. Eclipse and gedit are no problem, they accept the Fixedsys Excelsior TTF font. But the Gnome Terminal only accepts monospaced fonts. Although Fixedsys Excelsior is essentially monospaced, it contains larger glyphs (mostly for eastern languages), and also some ligatures. Since apparently ALL characters must have the same width for a font to be recognized as monospaced, Fixedsys Excelsior cannot be selected in all those contexts where monospaced fonts are required, including gnome-terminal. So what is the easiest/cleanest way to use a Fixedsys clone font in contexts that only accept monospaced fonts?

    Read the article

  • Creating a remote management interface

    - by Johnny Mopp
    I'm looking for info on creating a remote management interface for our software. This is not anything illicit. Our software is for live TV production and once they go on-air we can't access the PC (usually through LogMeIn). I would like to be able to upload/download files and issue commands to our software. The commands would be software specific like "load this file" or "run this script" or "return this value" etc. A socket connection is preferred but the problem is most of our PCs are behind firewalls and NAT servers. I'm not sure where to start. I think HTTP tunneling is the way to go but am wondering if there are other options or recommendations. Also, assume our clients are not willing to open up ports for security reasons. Thanks.
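
    Since the on-air machines can only make outbound connections, one pattern worth sketching (everything below is illustrative: control.example.com, the /api paths and the JSON shape are made up, and the dispatch into your software is a placeholder) is to have each PC poll a server you host over HTTPS and pull its commands down, so no inbound ports ever have to be opened on the client side:

        import json
        import time
        import urllib.request

        CONTROL_URL = "https://control.example.com/api"      # hypothetical control endpoint
        STATION_ID = "studio-3"
        ALLOWED = {"load_file", "run_script", "get_value"}    # whitelist of supported commands

        def handle(cmd):
            # placeholder: dispatch into the TV production software's own API here
            return "executed " + cmd["op"]

        def report(cmd, result):
            body = json.dumps({"station": STATION_ID, "cmd": cmd, "result": result}).encode()
            req = urllib.request.Request(CONTROL_URL + "/results", data=body,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)

        def poll_once():
            with urllib.request.urlopen(CONTROL_URL + "/commands?station=" + STATION_ID) as resp:
                for cmd in json.load(resp):                   # e.g. [{"op": "load_file", "arg": "show.dat"}]
                    if cmd.get("op") in ALLOWED:
                        report(cmd, handle(cmd))

        while True:
            poll_once()
            time.sleep(5)                                     # polling interval

    File transfer can ride the same channel (the client downloads from or uploads to URLs handed out in the command payload), which keeps everything on outbound HTTPS through the firewalls and NAT.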

    Read the article

  • Access local email stored on a workstation from a laptop on the LAN

    - by crafter
    I have the following scenario with my email: I am using Evolution as my primary email client on my workstation. The Evolution mail is downloaded from my mail servers using POP and then deleted from the server. When I am mobile, I access my email on my email server using webmail. My laptop is my primary computer these days, and the workstation is hardly used. When I am mobile, I am restricted to new email that has not yet been downloaded onto the workstation. I am now looking for a way to access the email stored on my workstation from my laptop, almost as if my workstation were a second-level email server. I tried running Evolution over an X display, but attachments then open for browsing on the workstation (not ideal, as most of my documents are on the laptop). I am open to changing mail client or installing a service on my workstation. What would be the best way to address this requirement?

    Read the article

  • What incidentals do server maintenance people/devs need?

    - by SeniorShizzle
    I'm trying to put together a thank-you package for clients who are server-side developers and in charge of their company's servers and databases. Since I've never been in that line of work before, I would like to know what it is like - to get inside your heads, so to speak. My first thought was (don't hate me) a few patch cables. I don't know if you actually need these often or not. What are some similar things that you like or need in your work life? Small incidentals under $10 are preferred, like coffee, a notebook, or screwdrivers, et cetera.

    Read the article

  • Looking for a simple-to-use email server that can be used programmatically (preferably remotely)

    - by sr2222
    I've been poking around the internet for much of the day, but I can't seem to find a good server to fit my needs. What I need is a lightweight email server that is simple to use and deploy (preferably open source), that I can create users on programmatically, and that has IMAP or POP support. I'd prefer something with an existing service interface, but if I have to write a REST API on top of an easy-to-use API, that's acceptable. The purpose of this tool is to allow a test automation framework to create new email accounts and retrieve email sent to those addresses. I need text, HTML, and possibly attachment support as well. Perhaps it's my noobishness, but I can't really suss out from the documentation of the servers available out there which ones fit my needs.
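
    Whatever server you settle on, the retrieval side of the test framework can stay simple as long as the server speaks IMAP. A rough sketch with the Python standard library (mail.test.example, the credentials and the test address are placeholders):

        import email
        import imaplib

        def fetch_messages_for(address, host="mail.test.example",
                               user="automation", password="secret"):
            """Return every message addressed to a throwaway test account."""
            conn = imaplib.IMAP4_SSL(host)
            conn.login(user, password)
            conn.select("INBOX")
            _typ, data = conn.search(None, 'TO', '"%s"' % address)
            messages = []
            for num in data[0].split():
                _typ, parts = conn.fetch(num, "(RFC822)")
                messages.append(email.message_from_bytes(parts[0][1]))
            conn.logout()
            return messages

        for msg in fetch_messages_for("signup-test-42@mail.test.example"):
            print(msg["Subject"], msg.get_content_type())    # body/attachments via msg.walk()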

    Read the article

  • Data maintenance/migrations in image-based systems

    - by User
    Web applications usually have a database, and the code and the database work hand in hand. That is why frameworks like Ruby on Rails and Django create migration files. But there are also servers written in Self or Smalltalk or other image-based systems that face the same problem: code is not written on the server but in a separate image belonging to the programmer. How do these systems deal with a changing schema and changing classes/prototypes? Which way do the migrations go? For example, what is the process by which a new attribute goes from the programmer's idea to the server code and all existing objects? I found the GemStone/S manual, chapter 8, but it does not really talk about the process of shipping code to the server.

    Read the article

  • Does Google treat AWS IP addresses as related?

    - by ElHaix
    We are hosting several websites on one of our servers, and we are wondering whether they have somehow been penalized because they are on the same subnet. We are not interlinking between the websites. However, in an attempt to have everything hosted in AWS, we will have some sites that we do want to be interlinked. If the sites resided on the same subnet, this could be bad. However, with AWS we can allocate multiple Elastic IP addresses that do reside on different subnets. How does Google deal with this?

    Read the article

  • IPSec Offload support in 82576GB controller for Linux

    - by Rodrigo Leal
    Due to a migration of servers to cloud computing, we bought several NICs that support mechanisms like SR-IOV and VMDQ. Furthermore, as security risk was also a concern and we did not want to create more overhead on the processor, IPsec offload support was essential. The model chosen was the Intel Gigabit ET2 Quad Port Server Adapter (with the 82576GB controller): http://ark.intel.com/products/49187/intel-gigabit-et2-quad-port-server-adapter However, we were unable to configure IPsec offload on Linux. We tried to test on another server we have, running Windows Server 2012 R2, but again without success; it seems that the driver for this controller is not available for Windows Server 2012 R2 or Linux. The test on Windows would have been for verification purposes only - we will not use this platform. Could someone confirm this lack of support on Linux?

    Read the article

  • How to recover a website's lost robots.txt?

    - by Jessica
    I found my website in the Wayback Machine a few months ago, but today I've tried again and now it tells me it can't find robots.txt. My old webhost stopped paying for their servers back in August without any notice; I was going to do a backup the day it happened. Is there a way just to find the text? I have the old IP and the images, but nothing else. None of the big search engines have a cached copy anymore, and I already looked in the caches of three of my Macs with nothing to be found.

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for their success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80ms response time and full HA", or something similar. Then I can choose the best fit from technologies which range from full on-premises computing to IaaS, PaaS or SaaS.

    I also deal in abstraction layers: on-premises systems are fully under your control; in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on); in PaaS the hardware and the OS are abstracted and you focus on code and data only; and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture becomes the primary factor in your decision. It's problem-first architecting: define the problem, then lay in whatever technology or vendor best fixes it.

    To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and the business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and the bill) to the business; IT might not even be involved. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on.

    IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, runtimes, scale patterns, tools and much more to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or with one cloud vendor or another.

    Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers, all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish), and then that document drives the distribution and implementation of that intent. As more and more companies implement solutions on various providers (perhaps for HA and DR), this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in.
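
    To make the "declare your intent, then stamp it out" idea concrete, here is a generic sketch in cloud-init's cloud-config format (this is not RightScale's own template language; the package names, file path and service are placeholders) describing what a server should look like rather than how to click it together:

        #cloud-config
        packages:
          - httpd
          - php
        write_files:
          - path: /etc/myapp/app.conf          # hypothetical application config
            content: |
              environment = staging
        runcmd:
          - service httpd start

    The same declaration can then be handed to any provider or on-premises image that understands it, which is the property that makes repeatable multi-cloud deployment scriptable in the first place.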
    Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials:
      - Sign Up for Windows Azure
      - Add Windows Azure to a RightScale Account
      - Windows Azure Virtual Machines 3-tier Deployment

    Momma Bear Level - Just the right level... ;0)
      - Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution.
      - Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (work in progress)

    Baby Bear Level - Marketing
      - Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos
      - Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better
      - SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines

    Read the article
