Search Results

Search found 24334 results on 974 pages for 'directory loop'.


  • Instructions to set up a primary and only domain controller

    - by Robert Koritnik
    Where can I get good step-by-step instructions (with some simple explanations) on how to set up a domain controller on Windows Server 2008 R2 Server Core? I don't know what I need: do I need DNS as well as AD, and so on and so forth? I don't know enough about these things, but I need to set them up to prepare a development environment. I would also like to know how to configure the firewall on the DC machine to make it visible to other machines, because I've set up a DC somehow but I can't connect to it... This is my HW config:
      - Linksys internet router with DHCP
      - my dev machine is Windows 7
      - my DC machine is a VM inside my dev machine
      - my dev machine has a hardware network adapter to the Linksys and a virtual network adapter to the DC
      - the DC machine has two network adapters: one to the Linksys (to be internet-connected so it can be updated etc.) and one to the host (my dev Win7 machine)
    Edit: My development machine should access the domain controller and log on using domain credentials. The development machine would access the internet directly via the Linksys router. My domain controller machine would only serve authentication and, if I'm able to configure it right, should also have Active Directory Federation Services in a workable condition. I hope this is a bit clearer now. At least a small bit.
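
    A minimal sketch of an unattended promotion on Server Core; the forest name and password are placeholders, not from the question. dcpromo installs the AD DS binaries itself, and /InstallDNS pulls in the DNS role in the same pass; other machines then need the DC's IP as their DNS server to find the domain:

        REM run on the Server Core VM: new forest with integrated DNS
        dcpromo /unattend /ReplicaOrNewDomain:Domain /NewDomain:Forest ^
            /NewDomainDNSName:dev.local /InstallDNS:Yes ^
            /SafeModeAdminPassword:P@ssw0rd! /RebootOnCompletion:Yes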

    Read the article

  • Migrating a JBoss installation and installing it on a PHP server

    - by David Martinez
    I'm configuring a new dedicated server that is going to run 3 sites, 2 of them migrating from an old server. Each site has its own domain and dedicated IP. Two of the sites are already up and running on PHP (one of them uses CakePHP); the third site is a migration from an old server and runs on JBoss. 1) Is it possible to have both JBoss and PHP running on the same Apache instance, or would I have to install a new one? 2) Can I just move the old JBoss server directory to the new server and start the server with the shell script? From what I read here, JBoss is distributed as a zip/tgz file with the server structure, so moving it from the old server to the new one should be the same. I want to do this because the old server is already configured, and it has 2 JBoss instances. I didn't develop this site and I don't have experience with JBoss. I have some documentation of the site, but it is not much, mostly server structure and the technology they used. The new server runs CentOS with cPanel, and I have full root access to the server. This question is similar to "How can I run JBoss Application Server and Apache on the same server?" but there he didn't have a dedicated IP for each domain.
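
    On question 1, the usual pattern is to keep PHP inside Apache and proxy the JBoss domain through to JBoss's own HTTP port; a rough sketch with invented names, assuming mod_proxy and mod_proxy_http are enabled and JBoss still listens on its default 8080:

        # PHP sites stay plain Apache vhosts on their dedicated IPs
        <VirtualHost 203.0.113.10:80>
            ServerName phpsite.example.com
            DocumentRoot /var/www/phpsite
        </VirtualHost>

        # JBoss site: Apache only forwards requests on its dedicated IP
        <VirtualHost 203.0.113.11:80>
            ServerName jbosssite.example.com
            ProxyPreserveHost On
            ProxyPass        / http://127.0.0.1:8080/
            ProxyPassReverse / http://127.0.0.1:8080/
        </VirtualHost>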

    Read the article

  • The Story of secure user-authentication in squid

    - by Isaac
    Once upon a time, there was a beautiful warm virtual jungle in South America, and a squid server lived there. Here is a conceptual image of the network:

        <the Internet>
              |
              |  A                    B
        Users <---------> [squid-Server] <---> [LDAP-Server]

    When the Users request access to the Internet, squid asks their name and passport, authenticates them against LDAP, and if LDAP approves them, he grants them access. Everyone was happy until some sniffers stole passports on the path between the users and squid [path A]. This disaster happened because squid used the Basic-Authentication method. The people of the jungle gathered to solve the problem. Some bunnies offered the NTLM method. Snakes preferred Digest-Authentication, while Kerberos was recommended by the trees. In the end, many solutions were offered by the people of the jungle, and everyone was confused! The Lion decided to end the situation. He shouted the rules for solutions:
      - The solution shall be secure!
      - The solution shall work with most browsers and software (e.g. download software)
      - The solution shall be simple and not need another huge subsystem (like a Samba server)
      - The method shall not depend on a special domain (e.g. Active Directory)
    Then a very reasonable, comprehensive, clever solution was offered by a monkey, making him the new king of the jungle! Can you guess what the solution was? Tip: the path between squid and LDAP is protected by the lion, so the solution does not have to secure it. Note: sorry if the story is boring and messy, but most of it is real! =)

            /~\/~\/~\
         /\~/~\/~\/~\/~\
       ((/~\/~\/~\/~\/~\))
      (/~\/~\/~\/~\/~\/~\/~\)
      (////   ~      ~   \\\\)
      (\\\\(  (0)  (0)  )////)
      (\\\\(  __\-/__   )////)
       (\\\(    /-\     )///)
       (\\\(  (""""")   )///)
        (\\\(  \^^^/   )///)
         (\\\(        )///)
          (\/~\/~\/~\/)
      **   (\/~\/~\/)
    *####*    |   |
     ****    /|   |  |   |\
       \\ _/  |   |  |   | \_
      _________//
    Thanks!
    (,,)(,,)_(,,)(,,)--------'
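
    One candidate that satisfies all four of the lion's rules is Digest authentication backed by LDAP, which squid ships a helper for; a hedged sketch (helper path, LDAP layout and option spelling are guesses, so check your distribution's squid documentation):

        # squid.conf: Digest auth against the LDAP-Server on path B
        auth_param digest program /usr/lib/squid/digest_ldap_auth \
            -v 3 -b "ou=people,dc=jungle,dc=org" -u uid -A userPassword \
            -D "cn=squid,dc=jungle,dc=org" -w secret -h ldap-server
        auth_param digest realm jungle-proxy
        acl authed proxy_auth REQUIRED
        http_access allow authed
        http_access deny all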

    Read the article

  • How to configure network on Windows Server 2008

    - by Gokhan Ozturk
    I have an IBM x3400 server machine with Windows Server 2008 R2 installed on it, but since I am no expert on networking I have some problems. These roles are installed on my server: Active Directory, DNS, File Sharing, Hyper-V, IIS, VPN. There are two network cards in it, which I configured like this (IP address, subnet mask, gateway, DNS):

        Local Connection 1: 192.168.30.3    255.255.255.0  192.168.30.2  127.0.0.1
        Local Connection 2: 192.168.30.101  255.255.255.0  192.168.30.6  127.0.0.1

    My problem is that when I use these IP gateways, it shares internet to all computers. This is not what I want. I want to use Local Connection 1 for the internal network, so I am giving all computers 192.168.30.3 as their gateway and DNS IP. Local Connection 2 is for Hyper-V and VPN connections. 192.168.30.2 and 192.168.30.6 are my modems' gateways; I am using the 192.168.30.6 external IP for VPN connections. There are two 24-port switches with a connection between them, and the two ethernet cards are connected directly to them. The modems are connected to the switches as well (the modems are not near the server; they are somewhere in the building). I disabled the network bridge and removed all ethernet cards from it. With this configuration, all computers can ping my server's IP (192.168.30.3), but from the server I cannot ping any clients (request timeout). What is the best way to configure my network? Thank you. Regards
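
    Two hedged guesses on the ping symptom: clients commonly drop a server's echo requests in their own Windows Firewall, and two default gateways on one box (both NICs have one above) routinely cause odd routing. Commands worth trying (the firewall syntax is the Vista/7-era netsh form):

        REM on each client: allow inbound ICMPv4 echo requests
        netsh advfirewall firewall add rule name="Allow ICMPv4 ping" ^
            protocol=icmpv4:8,any dir=in action=allow

        REM on the server: keep a default gateway on only one NIC
        REM (omitting the gateway argument clears it on this interface)
        netsh interface ipv4 set address name="Local Connection 1" ^
            static 192.168.30.3 255.255.255.0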

    Read the article

  • 3 simple questions about file permissions

    - by Camran
    1- I wonder, is this a good setup of permissions in the /var directory?

        drwxr-xr-x  2 root root  4096 2010-05-30 03:34 backups
        drwxr-xr-x  7 root root  4096 2010-05-29 17:55 cache
        drwxr-xr-x 29 root root  4096 2010-05-29 17:55 lib
        drwxrwsr-x  2 root staff 4096 2009-07-14 04:36 local
        drwxrwxrwt  3 root root    60 2010-06-02 03:34 lock
        drwxr-xr-x  9 root root  4096 2010-06-02 03:34 log
        drwxrwsr-x  2 root man   4096 2009-09-20 20:36 mail
        drwxr-xr-x  2 root root  4096 2009-09-20 20:36 opt
        drwxrwxrwt 12 root root   420 2010-06-02 12:12 run
        drwxr-xr-x  4 root root  4096 2009-09-20 20:37 spool
        drwxrwxrwt  2 root root  4096 2009-07-14 04:36 tmp
        drwxr-xr-x 14 user root  4096 2010-05-30 22:21 www

    2- Could you give me a brief explanation of the columns above? The first one shows which permissions they have. The second is a number. The third and fourth say "root root", for example. The fifth is another number (4096, for example). And the others are obvious.
    3- Could you give me a brief explanation of the folders above? Especially the "lock" and "tmp" folders; lock contains an apache2 folder which seems empty. Thanks
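
    For question 2, the columns break down like this (annotating the "log" line from the listing above; these meanings are standard ls -l output):

        # drwxr-xr-x       : type (d = directory) plus permissions for owner/group/others
        # 9                : hard-link count (for a directory: its subdirectories + 2)
        # root root        : owning user and owning group
        # 4096             : size in bytes (one filesystem block, typical for a directory)
        # 2010-06-02 03:34 : last modification time
        # log              : name
        drwxr-xr-x 9 root root 4096 2010-06-02 03:34 log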

    Read the article

  • Glassfish v3 failure on startup: "Cannot allocate memory"

    - by Shisoft
    This looks like the same problem as the question "Fail to start Glassfish 3.1: java.io.IOException: error=12, Cannot allocate memory", but in my case I have a 512 MB Ubuntu 10.04 VPS, so it seems I shouldn't need to change any configuration. Yet when I start the server, I get this exception:

        VM failed to start: java.io.IOException: Cannot run program
        "/usr/lib/jvm/java-6-sun-1.6.0.22/bin/java" (in directory
        "/home/glassfish/glassfish/domains/domain1/config"):
        java.io.IOException: error=12, Cannot allocate memory

    So I changed <jvm-options>-Xmx512</jvm-options> to <jvm-options>-Xmx400</jvm-options>, but the exception remains. What did I do wrong? Result of free -m:

                     total       used       free     shared    buffers     cached
        Mem:           512         43        468          0          0          0
        -/+ buffers/cache:          43        468
        Swap:            0          0          0

    Result of cat /proc/user_beancounters:

        Version: 2.5
        uid      resource       held     maxheld    barrier      limit  failcnt
        146049:  kmemsize    2670652     5385253   51200000   51200000        0
                 lockedpages       0           8       2048       2048        0
                 privvmpages   11134      134522     131200     262200        4
                 shmpages        648        1352     128000     128000        0
                 dummy             0           0          0          0        0
                 numproc          12          73        500        500        0
                 physpages      6519       28162          0  200000000        0
                 vmguarpages       0           0     512000     512000        0
                 oomguarpages   6527       28169     512000     512000        0
                 numtcpsock        4          14       4096       4096        0
                 numflock          0           5       2048       2048        0
                 numpty            1           2         32         32        0
                 numsiginfo        0           3       1024       1024        0
                 tcpsndbuf    159600      265744   20480000   20480000        0
                 tcprcvbuf     65536     3590352   20480000   20480000        0
                 othersockbuf  44232       90640   20480000   20480000        0
                 dgramrcvbuf       0       12848   10240000   10240000        0
                 numothersock     22          31       2048       2048        0
                 dcachesize        0           0   10240000   10240000        0
                 numfile        1002        1474      50000      50000        0
                 dummy             0           0          0          0        0
                 dummy             0           0          0          0        0
                 dummy             0           0          0          0        0
                 numiptent        24          24       2048       2048        0

    Thanks
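
    One hedged aside that stands out: -Xmx takes a unit suffix, so -Xmx400 requests 400 bytes rather than 400 MB. Note also the failcnt of 4 on privvmpages above: the OpenVZ container itself has refused allocations. A sketch of the options with units (the values are illustrative, not a recommendation):

        <!-- in domain1/config/domain.xml -->
        <jvm-options>-Xms128m</jvm-options>
        <jvm-options>-Xmx400m</jvm-options>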

    Read the article

  • Accessing ClearCase view drive from virtual machine is slow

    - by PermanentGuest
    I have a Windows XP virtual machine running under a Windows XP host. On the host, ClearCase 7.1.1.2 is installed and I have a dynamic view mapped onto some drive. The view has a certain VOB/directory structure where my application DLLs from the nightly build and config files are stored. I run my application on the host machine, which uses the DLLs and config files from the VOB, and everything runs smoothly. Now I want to move this set-up to a virtual machine. On the guest: I'm running the guest with VMware Player. I don't want to install ClearCase on it, as I don't want to expose this machine to the network; the network setting in the guest is 'host-only'. I have mapped the host's ClearCase view drive as a shared folder and I'm able to access this drive from the virtual machine, and the application runs. However, the problem is that access to the ClearCase drive from the virtual machine is very slow; I can see this in Windows Explorer. Because of this, starting my application takes several seconds in the virtual machine, while on the host it comes up pretty fast. My question is: is there any way to speed up the performance? I have managed to copy some of the DLLs which don't change frequently to the virtual machine to improve performance, but there are still a lot of DLLs which have to be taken from the ClearCase drive, as they change frequently. VMware Player version is 3.0.1 build-227600; both guest and host are Windows XP Service Pack 3; host ClearCase is 7.1.1.2.

    Read the article

  • Office Compatibility Pack and File Permissions

    - by hymie
    MS isn't my thing, so I hope somebody can give me a pointer. We have a Windows domain with a Server 2003 SP1 Enterprise file server. One of the files is an MS Excel 2007 (XLSX) file created by user LK. In the "Security" preferences setting, about a half-dozen users (including me) have access to this file. LK is the owner and has "Full control", while the rest of us have "Read", "Read & Execute", and "Write" permission. LK is also the owner of the directory that this file resides in; I don't know if that's relevant. So far so good. My desktop machine has Windows XP SP3, Excel 2003 SP3, and the "Office Compatibility Pack", which lets me read and write the new XLSX files. However, whenever I write the file, the permissions are changed: the newly-written file only has permissions for LK and me, and both are "Full control". So in short, what am I doing wrong, and how should I set this up to do it right, keeping the permissions that were on the file when I started?

    Read the article

  • Ubuntu preseed installation keeps missing mirror files

    - by JackWu
    I'm installing Ubuntu 12.04.2 with a preseed file, but there is one buggy problem with the preseed mirror setting: the installation process gets stuck. I tracked it down in the log file and found the real problem: the installation is looking for a file that's not there. This is just one of them; another pops up if I fake this file. This all happens during preseeding, so I believe preseed has something to do with it. I googled "ubuntu preseed mirror" and found this post saying:

        # If you select ftp, the mirror/country string does not need to be set.
        #d-i mirror/protocol string ftp
        d-i mirror/country string manual
        d-i mirror/http/hostname string archive.ubuntu.com
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string
        # Alternatively: by default, the installer uses CC.archive.ubuntu.com where
        # CC is the ISO-3166-2 code for the selected country. You can preseed this
        # so that it does so without asking.
        #d-i mirror/http/mirror select CC.archive.ubuntu.com
        # Suite to install.
        #d-i mirror/suite string lucid
        # Suite to use for loading installer components (optional).
        #d-i mirror/udeb/suite string lucid
        # Components to use for loading installer components (optional).
        #d-i mirror/udeb/components multiselect main, restricted

    I wonder about the difference between d-i mirror/http/hostname and d-i mirror/http/mirror; I mean, they both specify a mirror, right? In my preseed file there is no d-i mirror/http/mirror, and d-i mirror/http/hostname points to my own repo, as you might notice in the previous image. Here is my question: does preseed fetch files/resources from the internet if I use a local repo? Why is it looking for a file that's not even there? This has bothered me for quite some time; many thanks in advance to anyone who might give any help.

    Read the article

  • Postgres pgpass on Windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit. I'm trying to connect remotely to a Postgres instance to perform a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working. To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data:

        *:5432:*postgres:[mypassword]

    (I also tried explicit ip/dbname values, all asterisks, and every combination in between, and I've tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively.) From my client machine, I connect using:

        pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile]

    I'm presuming that I need to provide the '-w' switch in order to skip the password prompt, at which point it should look in the AppData directory on the server. It just comes back with:

        connection to database failed: fe_sendauth: no password supplied

    Any insights are appreciated. As a hack workaround, if there were a way I could tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
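
    For reference, the documented pgpass line format is hostname:port:database:username:password, the file is read on the machine running pg_dump (the client, not the server), and on Windows it is looked up under %APPDATA%\postgresql\, which normally expands to the Roaming directory. A sketch with a placeholder password; note the fourth colon, which the line quoted above is missing (if that is not just a transcription slip):

        REM file: %APPDATA%\postgresql\pgpass.conf
        REM i.e.  C:\Users\<windows-user>\AppData\Roaming\postgresql\pgpass.conf
        *:5432:*:postgres:mypassword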

    Read the article

  • Openfire: Granular alerts

    - by R.S.
    Our organization has had an Openfire server up and running for about a year now. So far we have used it for messaging in the I.T. department and alerts to all users. We hit a snag this week when one system went down and several notifications were sent out to inform users of progress. Some of the users were radiologists who do not use the particular system in question, and these users found the notifications more of an annoyance than informative. Since then I have been tasked with finding a more granular system for alerts. I am confident that Openfire can handle this, and I have just about settled on a way of getting it to work. My idea is to create a half dozen or so users, for example: Staff, Doctor, Assistant and Supervisor. Using Spark as our messenger has worked great so far, so I would like to stick with that if possible. With that in mind, under advanced login features the resource name can be changed to something unique, and non-unique users can log in under the same account; however, when a message is sent to one of these users, message delivery is inconsistent. Currently I have 4 users under the Assistant user and it seems only 1 of them receives the messages. Is this scenario even possible? I am avoiding working with the groups in Openfire because that function is atrocious. I could possibly integrate the system into our Active Directory, but I don't think that will get us to a workable solution any quicker or more efficiently.

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty evenly, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of midrange servers. This poses several problems:
      - Using a filesystem is very inefficient: the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive.
      - Updating the dataset is very time-consuming and requires remounting between set A and B.
      - The only reasonable handling is operating on the block device for backup, copying, etc.
    What I would like is a daemon that:
      - speaks HTTP (GET, PUT, DELETE and perhaps UPDATE)
      - stores data in an efficient structure; the index should remain in memory, and considering the number of objects, the overhead must be small
      - can handle massive numbers of connections, with slow (if any) time needed to ramp up
      - reads the index into memory at startup
      - provides statistics (nice, but not mandatory)
    I have experimented a bit with riak, redis, mongodb, kyoto and varnish with persistent storage, but I haven't had the chance to dig in really deep yet.

    Read the article

  • How do I connect over SSH without being asked for the password every time? I already followed some answers here but it doesn't work

    - by MEM
    Mac OS X Lion 10.7.3.
    1) On the host, I've created an authorized_keys file inside the .ssh folder, by doing: touch authorized_keys
    2) I've copied my public ssh key into the host's .ssh folder by doing: scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub
    3) I've placed its contents inside the authorized_keys file by doing: cat mykey.pub >> authorized_keys
    4) Then I've removed the mykey.pub file: rm mykey.pub
    5) In my terminal, locally, inside my ~/.ssh folder I ran: ssh-add mykey (notice that it is without the .pub extension)
    6) I've closed and opened the terminal again. When I first connected to this host, it was added to the known_hosts file inside ~/.ssh; I've opened known_hosts in pico and the hash is there. Still, every time I connect by doing ssh [email protected] it requests a password! What am I missing here?
    UPDATE: I've done EVEN TWO MORE THINGS here:
    7) Set the key to be the default identity: if it doesn't exist, create ~/.ssh/config (touch ~/.ssh/config) and place inside it the following line: IdentityFile ~/.ssh/yourkeyname. (id_rsa is normally your default key; this tells outgoing ssh connections to use your key as the default identity instead.)
    8) Add a bash process to your ssh-agent: ssh-agent bash; ssh-add ~/.ssh/yourkeyname. Lisinge's answer helped, but it's not definitive: if we restart the machine, the password gets prompted again!!! How can we debug this? What can we do here? How can we check where this process is failing?
    UPDATE 2: If I use ssh -v -i <keyfile> [email protected] I get, among other things:

        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        Warning: Identity file yourkeyname not accessible: No such file or directory.

    What does this message refer to? Is the identity file not accessible on the localhost, or is it not accessible on the remote host? Please advise
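
    A hedged checklist in command form; the two usual silent killers are permissions on the server side (sshd quietly ignores authorized_keys when they are too open) and a relative path after -i, since that "not accessible" warning refers to a file on the local machine:

        # on the server (host)
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # on the client: point -i at the private key with a full path
        chmod 600 ~/.ssh/mykey
        ssh -v -i ~/.ssh/mykey [email protected]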

    Read the article

  • Nginx rewrite is not working as expected

    - by SamFisher83
    I am trying to learn how to use nginx and its rewrite functionality. Nginx seems to be doing the rewrite:

        2012/03/27 16:30:26 [notice] 16216#0: *3 "foo.php" matches "/foo.php", client: 61.90.22.223, server: localhost, request: "GET /foo.php HTTP/1.1", host: "domain.com"
        2012/03/27 16:30:26 [notice] 16216#0: *3 rewritten data: "img.php", args: "", client: 61.90.22.223, server: localhost, request: "GET /foo.php HTTP/1.1", host: "domain.com"

    but in my access log I am getting the following:

        61.90.22.223 - - [27/Mar/2012:16:26:54 +0000] "GET /foo.php HTTP/1.1" 404 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0"
        61.90.22.223 - - [27/Mar/2012:16:30:26 +0000] "GET /foo.php HTTP/1.1" 404 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0"

    There is an img.php in the root directory, so I am not sure why I am getting a 404 error. Here is part of the configuration block:

        rewrite foo.php img.php last;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        location ~ /\.ht {
            deny all;
        }
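
    A hedged guess at the fix: a rewrite replacement without a leading slash yields the URI "img.php" rather than "/img.php", which then matches no location that can serve it. Anchoring the pattern and making the replacement absolute may be all it takes:

        rewrite ^/foo\.php$ /img.php last;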

    Read the article

  • Regarding partitions for dual-booting Ubuntu with pre-existing Windows 7

    - by Shasteriskt
    I have zero actual experience with configuring disk partitions, and the stuff I have read for the past few hours has been confusing me a bit, so please bear with me. First of all, I'd like to explain what I'm setting out to achieve:
      - Windows 7 with:
          C:\ Windows 7 (pre-existing installation)
          D:\ Data (already exists and has files on it)
      - Ubuntu 11 (does not exist yet, but I already have a LiveCD in hand) with:
          \root directory for Ubuntu
          \home on its own partition, as I plan
          \swap on its own partition with around 8 GB
    Here is the current situation: I have a single 500 GB hard disk with Windows 7 x64 installed, and the current partition scheme is as follows:
      - System Reserved: 100 MB (primary, active)
      - C: 100 GB, where Windows 7 is installed (primary)
      - D: 365 GB, where my files are located, LOTS of free space (primary)
    Now, I would like to shrink my D: drive and create around 40 GB of unallocated disk space for the Ubuntu installation, but here is what's confusing me a bit: I'm thinking I would create an extended partition and subdivide it into 3 logical partitions for the Ubuntu setup I had in mind. (If you think my setup is a bad idea, please let me know why; I also hope you can suggest a better one.) I am aware that I can have up to 4 primary partitions, or at most 3 primary partitions with 1 extended partition. Now, does the System Reserved portion count as one primary partition? I'm really new to these things and it is totally unclear to me. In shrinking my D: drive using Windows 7's Disk Management tool, I would get unallocated free space, but I don't know how to make an extended partition from it; it seems I can only create a primary partition from it, not an extended one. How do I go about it? (I'd also like to note, if it is of any importance, that I am trying to avoid the option to install Ubuntu alongside Windows, and would much rather use the custom install where I can specify which drives I wish to use and so on. Somehow I feel it's safer that way.)

    Read the article

  • RAID 10 or RAID 5 for multiple VMs - what is the best choice?

    - by Lars Fastrup
    I have just ordered a new rig for my business. We do a lot of software development for Microsoft SharePoint and need the rig to run several virtual machines for development and test purposes. We will be using the free VMware ESXi for virtualization. For a start, we plan to build and start the following VMs, all with Windows Server 2008 R2 x64:
      - Active Directory server
      - MS SQL Server 2008 R2
      - automated build server
      - SharePoint 2010 server for hosting our public web site and our internal intranet for a few people; the load on this server is going to be quite insignificant
      - 2x SharePoint 2007 development servers
      - 2x SharePoint 2010 development servers
    Beyond that we will need to build several SharePoint farms for testing purposes; these VMs will only be started when needed. The specs of the new rig are:
      - Dell R610 rack server
      - 2x Intel Xeon E5620
      - 48 GB RAM
      - 6x 146 GB SAS drives
      - Dell H700 RAID controller
    We believe the new server is going to make our VMs perform a lot better than our existing setup (2x Intel Xeon, 16 GB RAM, 2x 500 GB SATA in RAID 1), but we are not sure about the RAID level for the new rig. Should we go for the 6x 146 GB SAS drives in a RAID 10 configuration or a RAID 5 configuration? RAID 10 seems to offer better write performance and a lower risk of RAID failure, but it comes at the cost of less drive space. Do we need RAID 10, or would RAID 5 also be a good choice for us?

    Read the article

  • nginx hackery: change image file every X requests

    - by Vangel
    Let me describe what I am trying to do first. I have a bunch of pictures in a directory called /images/*.(jpg|gif|png|blah blah). Now say these images are embedded in an HTML page, and I don't really care which image it is or where it's embedded. For every 10th request for the same picture file (if possible), or for any picture, I want to display a fixed image (e.g. trollface.jpg). That's it! I have searched around a bit, but I am not even sure what I am looking for. Rewrite might help, but then it's a permanent thing; this has got to do something with requests. I have heard perl scripts can be used with nginx. I can't write an nginx module (though I did bravely look up the docs and then gave up). Before you ask "But why don't you do it in the application, noob?": this is a static-files-only server. The point is to not execute any binary at all.
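
    If nginx was built with the embedded perl module, a per-worker counter gets close to this without executing any separate binary; a rough sketch (approximate by design: each worker counts its own requests, so this is "every 10th request per worker", and the paths are assumptions):

        # http {} context; requires nginx built --with-http_perl_module
        perl_set $pick 'sub { my $r = shift; our $n; $n++;
            return ($n % 10 == 0) ? "/images/trollface.jpg" : $r->uri; }';

        server {
            location /images/ {
                root /var/www;          # /images/x.jpg -> /var/www/images/x.jpg
                try_files $pick =404;   # every 10th hit serves trollface.jpg instead
            }
        }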

    Read the article

  • Tortoise SVN, find all non-added files?

    - by John Isaacks
    I am using TortoiseSVN on Windows. I have a deep directory structure (using CakePHP). After creating several files, I then went through and added them. New files show a ? next to them, and a + after they are set to add, so it is easy to tell which files still need to be added; it is the same with folders. However, if a new file is inside an old folder, that folder will still show the green checkmark: you won't have any indicator telling you there is a new file. So I forgot to add one before a commit. This wouldn't be so bad, as I could just add it and recommit; however, I had also made this a tag, so I had to recommit to both the tag and the trunk. Anyway, this wouldn't have been an issue if there were an indicator on existing folders that contain new files. Is there a way to set this up?
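
    Two hedged pointers: TortoiseSVN's settings have a checkbox along the lines of "Unversioned files mark parent folder as modified" (under the icon overlay settings), which takes the green checkmark off such parents; and the command-line client lists every unversioned file recursively, assuming an svn.exe is installed (TortoiseSVN of that era did not ship one):

        rem run from the working-copy root; lines starting with ? are not yet added
        svn status | findstr "^?"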

    Read the article

  • Nginx & Munin - location and error 404

    - by user1684189
    I have a server running nginx + php-fpm with this simple configuration:

        server {
            listen 80;
            server_name ipoftheserver;

            access_log /var/www/default/logs/access.log;
            error_log /var/www/default/logs/error.log;

            location / {
                root /var/www/default/public_html;
                index index.html index.htm index.php;
            }

            location ^~ /munin/ {
                root /var/cache/munin/www/;
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/default/public_html$fastcgi_script_name;
            }
        }

    But when I open ipoftheserver/munin/ I receive a 404 error (when I request ipoftheserver/ the files in /var/www/default/public_html are served correctly). Munin is installed and works perfectly. If I remove this configuration and use this other one instead, everything works (but not in the /munin/ directory):

        server {
            server_name ipoftheserver;
            root /var/cache/munin/www/;
            location / {
                index index.html;
                access_log off;
            }
        }

    How to fix? Many thanks for your help
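
    The 404 is most likely the root/alias distinction: with root, nginx appends the full URI, so /munin/index.html is looked up at /var/cache/munin/www/munin/index.html. alias strips the location prefix instead:

        location ^~ /munin/ {
            alias /var/cache/munin/www/;
            index index.html;
        }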

    Read the article

  • How to automate downloading files?

    - by Damon
    I got a book which had a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, the presentation of all these is 177 pages of 8 images each, with links to zip files of jpgs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately. The pages run from archive_bookname/index.1.htm to archive_bookname/index.177.htm, and each of those pages has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip, or <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so doing this with a non-browser tool might be difficult without knowing how to recreate the session info. I've looked into wget a little, but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
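
    wget can do it, under two assumptions: the login session can be exported from the browser as a cookies.txt file, and the zips sit one link away from each index page (the hostname below is a placeholder for the <snip> parts):

        #!/bin/sh
        # walk all 177 index pages; follow links one hop, keep only the zips
        for i in $(seq 1 177); do
            wget --load-cookies cookies.txt \
                 --recursive --level 1 --no-directories \
                 --accept '*.jpg.zip' \
                 "http://example.com/archive_bookname/index.$i.htm"
        done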

    Read the article

  • How to manage unprivileged administration of system services using Debian?

    - by ypnos
    At our lab, we have several services handled by different PhD students (like myself). Fluctuation is high, and people do the job alongside their research duties. Until now, services were running on different machines, with different OS setups, which can quickly turn into administration hell. We want to consolidate our service setup. Our main idea is that the people responsible for the services should not meddle with the underlying system anymore. Apart from core systems like NFS and Kerberos, a typical service is able to run as non-root already; I'm talking about Apache, MySQL, Subversion, mail with Open-Xchange, and so on. Redirecting privileged ports is also no issue (source). What is left is the configuration of the service and its payload. One scenario we envisioned is that every service has its own user and home directory, accessible by the corresponding admins. Backup and fallback of the service is easy, as everything needed for the service to run is found in one place. Are there established ways to create such a setup? Does a mostly uniform method exist to make services find their files (other than in system directories) while still using the corresponding Debian packages? Are there any catches with our idea that we may have overlooked? Would you maybe claim that virtualization is the answer to our problem? (In our point of view, it wouldn't help us keep system setup strictly separated from service setup.) Thank you for any advice!
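
    For the one-user-one-home idea, a hedged sketch of the Debian-side plumbing (account and path names are invented); authbind is one established way to let such an account claim a privileged port without root:

        # one system account and home per service
        adduser --system --group --home /srv/svc-web svc-web

        # allow that account to bind port 80 (package: authbind)
        touch /etc/authbind/byport/80
        chown svc-web /etc/authbind/byport/80
        chmod 500 /etc/authbind/byport/80

        # the admins then start the daemon as:
        #   su - svc-web -c 'authbind --deep <daemon> <args>'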

    Read the article

  • Can't start Windows 7 after cloning HDD

    - by Paul
    Brief description:
      - cloned HDD1 -> HDD2
      - HDD1 partition 1 boots
      - HDD1 partition 2 boots
      - HDD2 partition 1 boots
      - HDD2 partition 2 doesn't boot Windows, but is bootable in general
    Now verbosely: in all cases the computer is the same. I have two Windows 7 installations on HDD1, and both boot fine; I choose between them using the standard Windows 7 boot loader menu. Technically there are 4 partitions: a 100 MB boot loader partition (active), Windows 7 copy 1 (25 GB), Windows 7 copy 2 (150 GB), and a working partition. All are primary. In the past few days I tried to clone the whole HDD1 to HDD2, which has the same size (but a 2.5-inch form factor), using MiniTool Partition Wizard. Everything was copied, all files are accessible, there are no faults in the file system structure, and even the boot loader wasn't damaged, so I didn't have to repair it. But I can boot only the first installation of Windows 7 (it boots without issues). When I choose the second installation, I immediately get a completely black screen without any text, cursor or other data; the HDD isn't accessed after that. The black screen does respond to Ctrl-Alt-Delete, which reboots the computer. I did some experimenting: I installed Windows 7 to that partition, and it booted fine. Then I renamed "Windows" to "Windows.old" and copied the Windows directory from HDD1 as it was, using Far Manager, and got the same trouble: a black screen. (Of course, I performed the renaming and copying from the other copy of Windows.) So it seems the problem is inside this installation of Windows, somewhere in its files.

    Read the article

  • make file readable by other users

    - by Alaa Gamal
    I was trying to share one session across all my subdomains. Subdomain number one, auth.site.com/session_test.php:

        session_set_cookie_params(0, '/', '.site.com');
        session_start();
        echo session_id().'<br />';
        $_SESSION['stop']='stopsss this';
        print_r($_SESSION);

    Subdomain number two, anscript.site.com/session_test.php:

        session_set_cookie_params(0, '/', '.site.com');
        session_start();
        echo session_id().'<br />';
        print_r($_SESSION);

    Now when I visit auth.site.com/session_test.php I get this result:

        06pqdthgi49oq7jnlvuvsr95q1
        Array ( [stop] => stopsss this )

    And when I visit anscript.site.com/session_test.php I get this result:

        06pqdthgi49oq7jnlvuvsr95q1
        Array ()

    The session id is the same, but the session is empty! After two days of failed tries, I finally detected the problem: file permissions. The session file is not readable by the other user. The session file on my server:

        -rw------- 1 auth auth 25 Jul 11 11:07 sess_06pqdthgi49oq7jnlvuvsr95q1

    When I run this command on the server:

        chmod 777 sess_06pqdthgi49oq7jnlvuvsr95q1

    the problem is fixed! The file becomes readable by (anscript.site.com). So, how can I fix this properly? How do I set the default permissions on session files? These are the permissions of the sessions directory:

        Access: (0777/drwxrwxrwx)  Uid: (0/root)  Gid: (0/root)
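
    Rather than loosening permissions after the fact, both vhosts can point at one shared session directory with group-friendly file modes; a sketch under stated assumptions (the path and mode are made up, PHP here runs as per-subdomain users as the auth/auth ownership above suggests, and the save_path "N;MODE;/path" syntax with a depth of 1 requires the one-character subdirectories to exist beforehand, see mod_files.sh in the PHP distribution):

        <?php
        // run before session_start() on every subdomain;
        // mode 0660 lets both subdomain users (sharing a group) read each
        // other's session files, instead of PHP's default 0600
        ini_set('session.save_path', '1;0660;/var/lib/php/shared_sessions');
        session_set_cookie_params(0, '/', '.site.com');
        session_start();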

    Read the article
