Search Results

Search found 15651 results on 627 pages for 'setup'.

Page 329/627 | < Previous Page | 325 326 327 328 329 330 331 332 333 334 335 336  | Next Page >

  • NginX & PHP-FPM, random 502.

    - by pestaa
    This is the error I'm receiving randomly:

      2010/09/19 14:52:07 [error] 1419#0: *10220 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [...], server: [...], request: "POST /[...] HTTP/1.1", upstream: "fastcgi://unix:/server/php-fpm.sock:", host: "[...]", referrer: "[...]"

    95% of the time my setup works perfectly, but once in a while I get a 502 for 3-4 subsequent requests. I'm using a Unix socket between the server and the PHP process, as you can see, and have set up the FastCGI params (SCRIPT_FILENAME, etc.) correctly. What can I do to strengthen the connection between these services? Thank you very much in advance.
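
    One thing worth checking: connection resets on a busy Unix socket often mean the PHP-FPM side ran out of workers or listen backlog. A sketch of settings commonly tuned for this (file paths and values are illustrative, not prescriptive):

      ; php-fpm pool config (path varies by distro/build)
      listen.backlog = 1024     ; queue more pending connections instead of resetting them
      pm.max_children = 50      ; enough workers to cover peak concurrency
      pm.max_requests = 500     ; recycle workers periodically to contain leaks

      # nginx location block handling PHP
      location ~ \.php$ {
          include fastcgi_params;
          fastcgi_pass unix:/server/php-fpm.sock;
          fastcgi_read_timeout 120s;    # tolerate slow responses rather than 502
      }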

    Read the article

  • Failed to configure CA certificate chain

    - by kron
    Hi all, I'm trying to set up SSL on Fedora with Apache. In my vhost:

      SSLCertificateFile /your/path/to/crt.crt
      SSLCertificateKeyFile /your/path/to/key.key
      SSLCertificateChainFile /your/path/to/DigiCertCA.crt

    I had it working fine with a self-signed key, but can't get it to work with the DigiCertCA crt. When I run service httpd restart, it fails to start. This is what I get in the logs:

      [Sat Jan 29 07:57:13 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suex$
      [Sat Jan 29 07:57:13 2011] [error] Failed to configure CA certificate chain!

    Any assistance would be really appreciated! Thanks
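
    That error usually means Apache couldn't read or parse the chain file, or the files don't belong together. A quick way to sanity-check them before restarting (paths mirror the placeholders above):

      # the certificate and key moduli must match
      openssl x509 -noout -modulus -in /your/path/to/crt.crt | openssl md5
      openssl rsa -noout -modulus -in /your/path/to/key.key | openssl md5

      # the chain file should parse as a PEM certificate
      openssl x509 -noout -subject -issuer -in /your/path/to/DigiCertCA.crt

      # verify the server cert using the chain as untrusted intermediates
      openssl verify -untrusted /your/path/to/DigiCertCA.crt /your/path/to/crt.crt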

    Read the article

  • X11 display over ssh with monitor connected to remote machine

    - by Sumit
    I have the following setup: Machine A (a.corp, 192.168.100.130, local machine) and Machine B (b.corp, remote machine), with a monitor connected to each of these machines. When I ssh from a.corp to b.corp:

      $ ssh -X b.corp
      $ xclock
      Error: Can't open display:

    I tried setting the DISPLAY variable:

      $ ssh -X b.corp
      $ export DISPLAY=`echo $SSH_CLIENT|cut -f1 -d\ `:0.0
      $ echo $DISPLAY
      192.168.100.130:0.0
      $ xclock

    xclock's display opens up, but on the monitor connected to b.corp (remote machine) and not on the monitor connected to a.corp (local machine). Is there a way to force the display to appear on the monitor of the local machine (a.corp)?
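
    For what it's worth, with ssh -X the remote DISPLAY should normally be left at whatever ssh assigns (a forwarded display such as localhost:10.0) rather than overridden by hand. A quick checklist, assuming OpenSSH on both ends:

      # on b.corp: confirm the server allows forwarding
      grep -i x11forwarding /etc/ssh/sshd_config    # should report "X11Forwarding yes"

      # on a.corp: request (trusted) forwarding and keep ssh's DISPLAY
      ssh -Y b.corp
      echo $DISPLAY    # expect something like localhost:10.0
      xclock           # should open on a.corp's monitor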

    Read the article

  • Fedora vs Ubuntu to host Subversion and Bugzilla over Apache

    - by Tone
    I'm not interested in a flame war of Ubuntu vs Fedora vs whatever. What I am interested in is whether or not I should move my current Ubuntu server to Fedora. I have been able to get Subversion set up and hosted via Apache over https and it works quite well (I'm a .NET guy, so this was all new to me). I'm having trouble, though, with installing Bugzilla - I have run into some issues getting all the perl scripts to run successfully. So my questions are: 1) Will Bugzilla install more easily on Fedora? Can I just install a package instead of having to download the tar.gz file, untar it, run perl scripts, etc.? 2) Is Fedora considered to be a better production server system? I have no desire for a GUI; I just need it to host Subversion and Bugzilla over Apache2, and act as a file and print server for my home network.

    Read the article

  • Fedora vs Ubuntu vs Debian to host Subversion and Bugzilla over Apache

    - by Tone
    I'm not interested in a flame war of Ubuntu vs Fedora vs Debian vs whatever. What I am interested in is whether or not I should move my current Ubuntu server to Fedora or Debian. I have been able to get Subversion set up and hosted via Apache over https and it works quite well (I'm a .NET guy, so this was all new to me). I'm having trouble, though, with installing Bugzilla - I have run into some issues getting all the perl scripts to run successfully. So my questions are: 1) Will Bugzilla install more easily on Fedora or Debian? Can I just install a package instead of having to download the tar.gz file, untar it, run perl scripts, etc.? 2) Is Fedora or Debian considered to be a better production server system? I have no desire for a GUI; I just need it to host Subversion and Bugzilla over Apache2, and act as a file and print server for my home network.
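
    On the packaging question: both distros have carried Bugzilla packages, so a package-manager install is plausible - a sketch, assuming the package names in use around that time (bugzilla in Fedora's repos, bugzilla3 in Debian's):

      # Fedora
      yum install bugzilla

      # Debian
      apt-get install bugzilla3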

    Read the article

  • Tracking Outgoing Links With Google Analytics Events

    - by the_archer
    I've been trying to track clicks on external links on my website using the events tracking method. So I've got my Google Analytics code set up before body ends, as shown below:

      <script type='text/javascript'>
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);
        _gaq.push(['_trackPageview']);
        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();
      </script>

    Now I wanted to track a link on the addthis.com follow widget. So there is a link of the type below, to which, following instructions from here, I added the onclick event:

      <a addthis:url='http://feeds.feedburner.com/myfeedburnerlurl' onClick="_gaq.push(['_trackEvent', 'Subscription Clicks', 'RSS']);" class='addthis_button_rss_follow'/>

    I clicked on it a couple of times and have left it for over a day now, but nothing shows up in Google Analytics events - it just says zero events. Here's a screenshot of the events page on GA. Could anybody help me? Am I doing anything wrong?

    Read the article

  • Sharepoint 2010 and Samba LDAP groups

    - by Jon Rhoades
    The setup:

      Windows 2008 SP2
      SharePoint 2010 Foundation
      Samba 3 "Domain"

    I'm trying to use the Samba LDAP users & groups we already have to grant access to SharePoint. I can successfully authenticate using the Samba accounts (getting the "Error: Access Denied" message as the user has no permissions). So SharePoint can clearly see and use the existing accounts/groups. What I can't do is grant permissions: in the grant permissions interface, SharePoint fails to match the account (I get a "No exact match found..."). Is there a way of getting the SharePoint permissions interface to recognise and use our existing Samba LDAP accounts? I get it - don't use Samba, use AD. If I had that option I would, but I don't.

    Read the article

  • faster (squid + apache httpd + apache tomcat)

    - by letronje
    We have a production setup where we have Squid in the front (caching images, js, css, etc.), Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php for a few php pages), and Apache Tomcat 5.5 at the end serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I'm wondering if replacing httpd with a faster web server like nginx/lighttpd would help. httpd right now does the job of rewriting URLs (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate), and serving some low-traffic php pages. What would be an ideal replacement for httpd given that we need these features? Is there a way to replace (squid + apache) with a single entity that does caching well (like squid) for static stuff, rewrites URLs, compresses responses, and forwards dynamic stuff directly to Tomcat? I've heard about Varnish Cache and am wondering if it can help.
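
    For what it's worth, nginx can cover most of that list in one tier: URL rewriting, gzip compression, proxying to Tomcat, and squid-like caching of static assets via proxy_cache. A rough sketch (assumes Tomcat's HTTP connector on port 8080, since nginx has no native AJP; the rewrite rule is hypothetical, and the few PHP pages would still need a FastCGI backend, omitted here):

      gzip on;                                    # replaces mod_deflate
      proxy_cache_path /var/cache/nginx keys_zone=static:50m;

      server {
          listen 80;
          rewrite ^/old/(.*)$ /app/$1 last;       # clean-URL rewriting (example rule)

          location ~ \.(png|jpg|gif|css|js)$ {
              proxy_cache static;                 # squid-like caching for static assets
              proxy_pass http://127.0.0.1:8080;
          }
          location / {
              proxy_pass http://127.0.0.1:8080;   # dynamic requests straight to Tomcat
          }
      }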

    Read the article

  • Is there a special way to set up Garry's Mod for Mac? [closed]

    - by credford
    I just downloaded Garry's Mod from Steam and every time I try to play Single Player on the first map, it crashes on the Loading Resources section of the progress bar. Here is some of the exception information the system prints out: Exception Type: EXC_BAD_ACCESS (SIGBUS) Exception Codes: KERN_PROTECTION_FAILURE at 0x000000001acec4d0 Crashed Thread: 4 When I ran it on the Windows side (Bootcamp), Windows 7 required me to give Garry's Mod some permissions. I wasn't asked to do this on the Mac side. Maybe that's it? Is there some kind of special setup for the Mac?

    Read the article

  • How to make ssh/rsync/etc use a VLAN network interface?

    - by Annan
    A company I work for has a number of virtual servers with ElasticHosts. They are set up in such a way that eth1 is on a private VLAN connecting them to each other. This is so backups sent between servers are not charged at the same rate as external data transfer. My understanding of how VLANs and network interfaces work is sketchy at best. How can I make ssh, rsync, etc. transfer data through the VLAN?

    My final solution: I spent a while trying to figure this out. For all servers involved, edit /etc/sysconfig/network-scripts/ifcfg-eth1:

      DEVICE=eth1
      BOOTPROTO=static
      ONBOOT=yes
      HWADDR=YOUR_MAC_ADDR
      IPADDR=192.168.0.100
      NETMASK=255.255.255.0

    where HWADDR should already be set and the last octet of IPADDR should differ between servers. Then run, on all servers:

      /etc/init.d/network restart

    After this, the IP addresses specified by IPADDR can be used directly like any other IP address, as in the example below.
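
    For instance, pointing ssh or rsync at the peer's VLAN address is enough to make the transfer ride eth1 (a sketch using the hypothetical addresses above; the paths are made up):

      # ssh to the peer over the private VLAN
      ssh root@192.168.0.101

      # push backups across the VLAN instead of the billed public interface
      rsync -av -e ssh /var/backups/ root@192.168.0.101:/var/backups/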

    Read the article

  • Disk names in Solaris 10 for ZFS: SAS WWN instead of c0t0d0

    - by notpeter
    I currently have a server running Solaris 10u9 with a SAS enclosure (Dell PowerVault MD1000) filled with SATA disks attached to an SAS card (LSI 3801E). It happily recognizes the 15 disks in the MD1000 and presents each disk in the traditional Solaris form (c1t12d0, c1t13d0, c1t15d0, etc.). My home ZFS setup (Nexenta CP3 + LSI 9200-16E + directly cabled SATA disks) presents disks as their SAS WWN ID (ex: c3t600039300001EA56d0). Although this ID is longer, I've found it much easier to troubleshoot because the cabling/slot is irrelevant; ZFS just identifies the disk by ID, and if it's connected it finds it. Most manufacturers print the WWN right on the disk's top label - can't get much easier than that. So how can I get Solaris to identify disks by SAS WWN instead of by cXtXdX names?
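
    One commonly suggested route - an assumption about this controller, not something verified on the 3801E - is to enable Solaris I/O multipathing (MPxIO) with stmsboot, which renames supported devices to their WWN-based form:

      # enable multipathing; device names switch to WWN form on supported HBAs
      stmsboot -e
      # a reboot is required; stmsboot updates /etc/vfstab and the dump config for you

      # afterwards, list the mapping between old and new names
      stmsboot -L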

    Read the article

  • Keepalived takes several minutes to recover in a particular situation

    - by NathanE
    I've set up Keepalived for a master-slave style virtual IP and it seems to work well. Both are hosted in almost identical VMs. If I "pause" the VM that is running the master, the slave will take over, as expected, almost instantly. However, if I then "unpause" the VM that runs the master, the virtual IP will stop responding to pings, and it takes a good 4 or 5 minutes for it to start pinging again. It seems to be getting desynchronised due to the nature of the way I'm testing it (by pausing/unpausing the VMs). I admit that pausing and unpausing VMs is a slightly dodgy way to test this, but it has raised a concern for me that there could be other scenarios that cause the same undesirable behaviour. Is this expected / by design? Is there anything I can do to the config to improve it? Thanks.
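
    Multi-minute blackouts after a failback are often just stale ARP caches upstream: the switch keeps sending traffic for the VIP to the old MAC. Keepalived can send extra gratuitous ARPs to flush them - a sketch of the relevant vrrp_instance options (availability depends on your keepalived version; values are illustrative):

      vrrp_instance VI_1 {
          ...
          garp_master_delay 1       # send a second gratuitous ARP burst shortly after becoming master
          garp_master_refresh 10    # keep re-announcing the VIP's MAC every 10s while master
      }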

    Read the article

  • Netboot Intel Macs without BSDP

    - by notpeter
    I have a netboot setup with DeployStudio that works great in my lab, but doesn't work on our main network. After some digging, I believe it's because our network admins are filtering BSDP (Boot Service Discovery Protocol) on our subnet at the switch level. Is it possible to hard code which server my clients (early 2007 iMac Core2Duos) should boot from without relying on BSDP? Perhaps relevant details: I do not have control over switch configs or DHCP settings. Client and server are running 10.6 Snow Leopard. The clients see the netboot server advertising itself in the 'Startup Disk' system preferences pane, but when I go to netboot it just leaves me with a flashing globe.
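
    One thing that may be worth trying: bless can point a client at a NetBoot server explicitly. This still speaks BSDP, but as unicast to the named server rather than broadcast discovery, so whether it clears the filtering depends on what exactly is blocked (the server address below is hypothetical):

      # on the client, set the startup disk to a specific NetBoot server
      sudo bless --netboot --server bsdp://192.168.1.10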

    Read the article

  • ganglia graphs like munin for cpu, etc?

    - by CarpeNoctem
    I'm coming from munin, where a CPU graph contains data for system, user, nice, etc., ALL on one graph. I just installed ganglia and set up the basic monitoring. It appears that each type of cpu data is a separate graph! WTF is this - can I change the defaults to combine these into a single graph per host? That is my question: how do I combine cpu data into a single graph? Also, can I change the layout to something closer to munin's day-week side-by-side layout? I'm trying to be impartial and give ganglia a chance. ;)

    Read the article

  • Windows 2008 as home file server and more

    - by Christian W
    I currently have a freenas unit as a NAS and a Win2k8R2 unit as a server. However, I would like to consolidate these two units into one. What I really like about the freenas unit is the ZFS filesystem, and the only reason I care about the ZFS filesystem is the easy way I can grow an existing filesystem just by inserting a new drive. How would this work in Win2k8? Say I set up my unit with a separate drive as C: and a 1TB drive as D:, with D: segmented into D:\Videos, D:\Music, and D:\Pictures. When everything gets close to filling the storage drive, I would like to expand the storage, but I wouldn't want to have E:\Videos or D:\Videos2 (using the NTFS folder mount thingy). I still want all my videos to reside in D:\Videos, and I want the OS to decide which drive they're actually stored on... some kind of on-the-fly JBOD expansion :) Is this at all possible in Windows 2008?
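
    The closest built-in analogue in Windows 2008 is converting D: to a dynamic disk and extending it onto new drives as a spanned volume - no redundancy, and a sketch only (volume and disk numbers are placeholders, and the volume must be dynamic first):

      diskpart
      DISKPART> list volume
      DISKPART> select volume 2     # the D: data volume (placeholder number)
      DISKPART> extend disk=2       # span D: onto the newly added disk 2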

    Read the article

  • SSIS Send Mail Task and ForceExecutionValue Error

    - by Kevin Shyr
    I tried to use the "ForcedExecutionValue" on several Send Mail Tasks and log the execution into a ExecValueVariable so that at the end of the package I can log into a table to say whether the data check is successful or not (by determine whether an email was sent out) I set up a Boolean variable that is accessible at the package level, then set up my Send Mail Task as the screenshot below with Boolean as my ForcedExecutionValueType.  When I run the package, I got the error described below. Just to make sure this is not another issue of SSIS having with Boolean type ( you also can't set variable value from xp_cmdshell of type Boolean), I used variables of types String, Int32, DateTime with the corresponding ForcedExecutionValueType.  The only way to get around this error, was to set my variable to type Object, but then when you try to get the value out later, the Object is null. I didn't spend enough time on this to see whether it's really a bug in SSIS or not, or is this just how Send Mail Task works.  Just want to log the error and will circle back on this later to narrow down the issue some more.  In the meantime, please share if you have run into the same problem.  The current workaround is to attach a script task at the end. Also, need to note 2 existing limitation: Data check needs to be done serially because every check needs to be inner join to a master table.  The master table has all the data in a single XML column and hence need to be retrieved with XQuery (a fundamental design flaw that needs to be changed) The next iteration will be to change this design into a FOR loop and pull out the checking query from a table somewhere with all the info needed for email task, but is being put to the back of the priority. Error Message: Error: 0xC001F009 at CountCheckBetweenODSAndCleanSchema: The type of the value being assigned to variable "User::WasErrorEmailEverSent" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object. Error: 0xC0019001 at Send Mail Task on count mismatch: The wrapper was unable to set the value of the variable specified in the ExecutionValueVariable property.   Screenshot of my Send Mail Task setup:

    Read the article

  • Best configuration and deployment strategies for Rails on EC2

    - by Micah
    I'm getting ready to deploy an application, and I'd like to make sure I'm using the latest and greatest tools. The plan is to host on EC2, as Heroku will be cost-prohibitive for this application. In the recent past, I used Chef and the Opscode platform for building and managing the server infrastructure, then Capistrano for deploying. Is this still considered a best (or at least "good") practice? The Chef setup is great once done, but pretty laborious to set up. Likewise, Capistrano has been good to me over the past several years, but I thought I'd take some time to look around and see if there have been any landscape shifts that I missed.

    Read the article

  • Active Directory Problem

    - by Ankur Dholakiya
    Hello all, I have one server with Windows Server 2008 installed, running AD, SQL, and IIS. Now I am trying to attach a different HDD to this same server. I am able to install Windows Server 2008 R2 64-bit on the server, but when I try to install Active Directory, the setup never completes and keeps processing at the following step: "Configuring Active directory and local host domains ......." If I attach the same HDD to any other PC, the Active Directory setup completes successfully. My server is a Xeon quad core with 8GB of RAM. Can anyone help with an appropriate solution for this?

    Read the article

  • re: 3ware raid 10 (4 drive) stripe size suggestions?

    - by dasko
    I looked around on the site but found nothing really concrete on my question. I will have about 120GB of data total, made up of 5MB files, Excel, Word, and about 25 .pst files that are about 1.2GB each. Yes, they use .pst files over the network; even though it is not recommended, this is a legacy setup without issue, so we will continue to support it for another year or so. I need to know what you think about a stripe size of 256KB for the RAID 10 based on the above requirements. I did try to bench with these settings and it seems alright, without any real issue; I'm just trying to rule out anything I might have missed. Thanks.
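
    If you want to double-check the stripe choice empirically rather than by rule of thumb, a quick pass with dd at the candidate stripe size is one rough test (a sketch; the mount point is a placeholder, and oflag=direct bypasses the page cache so you measure the array rather than RAM):

      # sequential write at roughly the stripe size
      dd if=/dev/zero of=/mnt/raid/testfile bs=256k count=4000 oflag=direct

      # sequential read back
      dd if=/mnt/raid/testfile of=/dev/null bs=256k iflag=direct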

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API.

    We are also today releasing command-line tools (which, like the portal, call the REST Management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.
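
    As a flavor of what scripting against the service looks like with the Bash toolset, here is a sketch (subcommand names reflect the early cross-platform CLI and may differ in later versions; the site and file names are made up):

      # import credentials from a downloaded .publishsettings file
      azure account import mycredentials.publishsettings

      # create a web site with a local git repository wired for push deployment
      azure site create mysitename --git

      # deploy by pushing, then inspect what is running
      git push azure master
      azure site list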

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure.

    The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data center at least 400 miles away from your primary data center as a backup.

    We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally - no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for web sites, including the ability to monitor and track resource utilization in real time.

    You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment.

    You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience.

    Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales.

    Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services" – with this week's release we are now referring to this capability as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. The cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required).

    The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit of the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,

    Scott

    Read the article

  • Modify HTML Content with Squid

    - by user38400
    We have setup our network as per the tutorial here: https://help.ubuntu.com/community/Upside-Down-TernetHowTo. Basically, we have a squid proxy that inverts images for pages that clients request. We're trying to modify the script so that we can edit the contents of the webpage before the webpage is sent to the client. We are not having any luck. I'm wondering if there is something different about .html files that makes this not possible. What is happening is that we do a wget on the URI that is requested, save it locally, modify it and then echo back the new URI. The page that the user gets is the unmodified page and not the one that we just changed.
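
    For reference, the upside-down-ternet approach hangs off squid's url_rewrite_program: squid hands each requested URL to a script, which fetches and rewrites the page locally and echoes back a URL pointing at the edited copy. A minimal sketch of that flow (assumes a web server on the proxy box serving /var/www, and a single client - concurrent requests would overwrite each other's copies):

      # /etc/squid/squid.conf
      #   url_rewrite_program /usr/local/bin/rewrite.sh

      #!/bin/bash
      # rewrite.sh - squid writes "URL ip/fqdn ident method" lines to our stdin
      while read url rest; do
          if [[ "$url" == *.html ]]; then
              wget -q -O /var/www/tmp/page.html "$url"              # fetch the original
              sed -i 's/Example/EXAMPLE/g' /var/www/tmp/page.html   # hypothetical edit
              echo "http://127.0.0.1/tmp/page.html"                 # hand squid the edited copy
          else
              echo "$url"                                           # pass everything else through
          fi
      done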

    Read the article

  • Clustering and custom applications

    - by Ahmed ilyas
    I was not entirely sure what tags to put, but hope this is ok. This is just a general question in regards to clustering and applications. Let's say we have a clustered environment set up, and we cluster SQL Server (I don't know exactly how it's done, but let's just say it's been done for the sake of argument). Now if a website or application is trying to access that database for read/write (say an ASP.NET app or a C# Winforms app) and during that time SQL goes down, it takes a couple of minutes for the clustering failover to take effect and switch to another node. What happens during this time? I think it will time out / be unable to connect. BUT is there a way for it to place the request in some pipeline so that when the cluster node is back up / switched over, it will continue as normal? As you can see, I know nothing much about clustering! What about your own custom .NET apps? Would there be a special way to develop them? I know that you can, say, create a simple Hello World app and cluster that, but it wouldn't be something you could see in terms of the UI or anything, so it would effectively need to be developed as a Windows Service perhaps, or even as a standard console app which runs and doesn't wait for user input, though you wouldn't see any output from it (unless you redirect output somewhere else). What I'm getting at here is: for those who have experience with or have developed a clustered application in .NET, how did you do it and what are the things to be aware of? For example, we have the cloud service - fundamentally it's built on clustering - and if there is an outage, another node takes its place and service is resumed as normal, but we don't really see much of that downtime.

    Read the article

  • Apache SSL losing session over load balancer

    - by SaltyNuts
    I have two physical Apache servers behind a load balancer. The load balancer was supposed to be set up so that a user would always be sent to the same physical server after the first request, to preserve sessions. This worked fine for our web apps until we added SSL to the setup. Now the user can successfully log in and see the home page, but clicking on any other internal link logs the user right out. I traced the issue to the fact that while initial authentication is performed by server 1, clicking on internal links leads to the request being sent to server 2. Server 2 does not share sessions with server 1, and the user is kicked out. How can I fix it? Do I need to share sessions between the two servers? If so, could you point me to a good guide for doing this? Thanks.

    Read the article

  • Software RAID to hardware RAID, can it be done?

    - by gtaylor85
    Can it be done... well. (For the record, I did not set this server up.) In my server there are 4 disks. 3 of them are in a software RAID 5, and 1 has the OS installed. I want to buy a RAID controller and 4 new HDs, and set up a hardware RAID 5. If possible, I'd like to just image the current setup and use it to build my new one. My questions are: Can I image a 3-disk RAID 5 onto a 4-disk RAID 5? Are there problems with this? What is considered best practice for the OS - to have it on a separate disk like it currently is, or to install it on the RAID 5? Thank you. I can clarify anything; I'm not sure what other info might be pertinent.
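
    On the imaging question: a block-level copy of the old md array onto the new controller's logical disk, then growing the filesystem into the extra space, is one plausible route - a sketch only, with placeholder device names (/dev/md0 for the old array, /dev/sdb for the new hardware-RAID volume) and assuming an ext3-family filesystem:

      # copy the old 3-disk array's contents onto the (larger) new logical disk
      dd if=/dev/md0 of=/dev/sdb bs=4M

      # then grow the filesystem into the extra space
      e2fsck -f /dev/sdb
      resize2fs /dev/sdb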

    Read the article

  • Dualbooting Win7 and Gentoo, error

    - by Tommy Jakobsen
    Hello, I'm trying to set up a dual boot with Gentoo Linux and Windows 7. Here are my partitions:

      /dev/sda1  /boot partition, ext2
      /dev/sda2  win7 partition, ntfs
      /dev/sda3  swap partition, linux swap
      /dev/sda4  root partition, btrfs

    Using Grub, I can boot into Gentoo, but when I choose to boot Windows 7, nothing happens. It just prints the Grub options for that choice and then hangs. grub.conf:

      default 0
      timeout 30

      title Gentoo
      root (hd0,0)
      kernel /boot/kernel-x86_64-2.6.31 root=/dev/sda4

      title Windows
      rootnoverify (hd0,1)
      makeactive
      chainloader +1

    Any ideas? Help will be much appreciated!

    Read the article
