Search Results

Search found 44026 results on 1762 pages for 'raid question'.


  • Can one config LDAP to accept auth from ssh-agent instead of from Kerberos?

    - by Alex North-Keys
    [This question is not about getting your LDAP password to authenticate you for SSH logins. We have that working just fine, thank you :-) ] Let's suppose you're on a Linux network (Ubuntu 11.10, slapd 2.4.23), and you need to write a set of utilities that will use ldapmodify, ldapadd, ldapdelete, and so on. You don't have Kerberos, and don't want to deal with its timeouts (most users don't know how to get around this), quirks, etc. This resolves the question to one of where else to get credentials to feed to LDAP, probably through GSSAPI - which technically doesn't require Kerberos despite its dominance there - or something like it. However, nearly everyone seems to have an SSH agent program, complete with its key cache. I'd really like an ssh-add to be sufficient to allow passwordless LDAP command use. Does anyone know of a project working on using the SSH agent as the source of authentication to LDAP? It might be through an ssh-aware GSSAPI layer, or some other trick I haven't thought of. But it would be wonderful for making LDAP effortless. Assuming I haven't just utterly missed a way to use ldapmodify and kin without having to type my LDAP passwords - using -x is NOT acceptable. At my site, the LDAP server only accepts ldaps connections, and requires authentication for modifying operations. Those are requirements, of course. Any ideas would be greatly appreciated. :-)

    Read the article

  • Openfiler iSCSI performance

    - by Justin
    Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5: dual Xeon processors, 6GB ECC RAM, an Intel Gigabit Server NIC, and a SAS controller with 3 x 10K SAS drives in a RAID 5. When I run a simple write test on the box directly, the performance is very good:

        [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
        1000+0 records in
        1000+0 records out
        1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s

    So I created a LUN, attached it to another box I have running ESXi 5.1 (Core i7 2600k, 16GB RAM, Intel Gigabit Server NIC) and created a new datastore. Once I created the datastore I was able to create and start a VM running CentOS with 2GB of RAM and 16GB of disk space. The OS installed fine and I'm able to use it, but when I ran the same test inside the VM I got dramatically different results:

        [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
        1000+0 records in
        1000+0 records out
        1048576000 bytes (1.0 GB) copied, 26.8786 s, 39.0 MB/s
        [root@localhost ~]#

    Both servers have brand new Intel Server NICs and I have jumbo frames enabled on the switch, on the Openfiler box, and on the VMkernel adapter on the ESXi box. I can confirm this is set up properly by using the vmkping command from the ESXi host:

        ~ # vmkping 10.0.0.1 -s 9000
        PING 10.0.0.1 (10.0.0.1): 9000 data bytes
        9008 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.533 ms
        9008 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.736 ms
        9008 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.570 ms

    The only thing I haven't tried as far as networking goes is bonding two interfaces together. I'm open to trying that down the road, but for now I'm trying to keep things simple. I know this is a pretty modest setup and I'm not expecting top-notch performance, but I would like to see 90-100MB/s. Any ideas?
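
    As an aside on methodology, dd from /dev/zero without a flush mostly measures the page cache inside the guest, so a hedged variant of the same test makes the Openfiler-local and in-VM numbers easier to compare. These are standard GNU dd flags, nothing Openfiler- or ESXi-specific:

        # Include the time needed to flush to stable storage in the reported rate:
        dd if=/dev/zero of=tmpfile bs=1M count=1000 conv=fdatasync
        # Or bypass the page cache entirely:
        dd if=/dev/zero of=tmpfile bs=1M count=1000 oflag=direct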

    Read the article

  • Enabling Hyper-V Integration Services Time Synchronization versus Internet Time Synchronization

    - by cpuguru
    Should I deselect the "Synchronize with an Internet Time Server" checkbox under the VM's "Date and Time - Internet Time Settings" tab if the "Time Synchronization Service" for a Hyper-V-based Virtual Machine is enabled? One of the Integration Services that Hyper-V provides is the Time Synchronization Service, which can be enabled/disabled by going to a VM's Settings-Integration Services setting in the Management section. I believe this is checked by default. When you install a Windows Server 2008 OS in a VM on the Hyper-V server, it comes with the "Synchronize with an Internet Time Server" option set, pointing to "time.windows.com". I'd think that if the parent Hyper-V server is set to one time server, and the child VM is pointing to a different time server, there would be a momentary blip if the two are not spot on with their times when the synchronization services run. So the question is, which time sync service should I use? I'm assuming not both. And what is the advantage of one over the other? Note: This question assumes that the machines are not joined to a domain. If they were, the machines would also try to update their time against the domain controller with the primary domain controller role too, right? Thanks!
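
    For reference, which source the guest is actually following can be checked from inside the VM with the stock w32tm client (this is just a sanity check; output details vary by OS version):

        REM The Hyper-V integration component shows up as the
        REM "VM IC Time Synchronization Provider"; Internet time sync shows an
        REM NTP peer such as time.windows.com.
        w32tm /query /source
        w32tm /query /status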

    Read the article

  • Server Performance

    - by sb12
    I know very little about performance tuning of servers etc., so I thought I'd put this up here as I start some research on it, just to get some direction. I am in the process of migrating from my old server to a new one - both are 64-bit machines. One is a few years old, the other brand new (PowerEdge R410).

      - Old server: 2 CPUs, 3.4GHz Pentiums, 8GB of RAM, Fedora 11 currently installed
      - New server: 16 CPUs, 3.2GHz Xeon, 16GB of RAM, CentOS 6.2 installed

    Also, RAID10 is on the new server - no RAID on the old one. Both servers currently have the same database (MySQL) with the same data migrated. I wrote a Perl script that simply steps through each row of a table in the database (about 18000 rows) and updates a value in that row. Every row in the table is updated. Out of curiosity I ran this Perl script on both machines, just to see how the new server would perform vs. the old one, and it produced interesting results: the old server was twice as fast as the new one to complete. Looking at the database, both are configured exactly the same (the new one being a dump of the old one)... Does anyone have any ideas why this would be, given the hardware gap between the two? As I said, I'm about to start some digging, but thought I'd put this up here to maybe get some good direction. Many thanks in advance.
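
    Purely as a starting point for the digging: per-row updates like that are usually bound by how often each commit is flushed to disk, not by CPU count, so one cheap check is whether the durability settings differ between the two boxes. The variable names below are standard MySQL/InnoDB; whether they actually differ, and whether the table is InnoDB at all, is only a guess:

        # Run on both servers and compare:
        mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"
        mysql -u root -p -e "SHOW VARIABLES LIKE 'sync_binlog';"
        mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_flush_method';"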

    Read the article

  • Can't get an IBM xSeries 345 server to load Windows Server 2003 using ServerGuide utility

    - by Kyle Noland
    I have a client that has an IBM xSeries 345 eServer. Per the IBM support website, I have downloaded the ServerGuide Setup 7.4.17 installation ISO and burned a bootable CD. The CD boots fine and loads the utility. I walk through the following screens without any issue:

      - Set the date and time
      - Detect the IBM ServeRAID card and install the latest firmware
      - Clear the hard disks
      - Set up the RAID array

    The next step is to format the NOS partition. I select my partition size and the utility goes through the following steps:

      - Creating NOS partition
      - Formatting NOS partition (NTFS)
      - Copying W32 files

    The copying of the W32 files takes about 10 minutes, and I can see the CD drive and disks working hard. When the copying is complete, I'm taken to a blank page with just "NOS Partitioning" at the top. At the bottom of the screen are the familiar Back and Exit buttons. I can see the place where the Next button should be, and if I click on it I can tell there is something there, but the space is empty: no button is displayed, and clicking the empty spot never takes me to the next screen. I can't load the OS until I get past this part. I have already tried:

      - Burning multiple copies and versions of the ServerGuide CD
      - Letting the final screen just sit there over the weekend, thinking it might advance after syncing the drives or something

    Has anybody else seen this? I'm really at a loss here.

    EDIT: I found another person who has the exact same problem as me: http://www.ibm.com/developerworks/forums/thread.jspa?messageID=14451763

    Read the article

  • ZFS - Impact of L2ARC cache device failure (Nexenta)

    - by ewwhite
    I have an HP ProLiant DL380 G7 server running as a NexentaStor storage unit. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache and a DDRdrive PCI ZIL accelerator. This system serves NFS to multiple VMware hosts. I also have about 90-100GB of deduplicated data on the array.

    I've had two incidents where performance tanked suddenly, leaving the VM guests and the Nexenta SSH/web consoles inaccessible and requiring a full reboot of the array to restore functionality. In both cases, it was the Intel X25-M L2ARC SSD that failed or was "offlined". NexentaStor failed to alert me on the cache failure; however, the general ZFS FMA alert was visible on the (unresponsive) console screen. The zpool status output showed:

          pool: vol1
         state: ONLINE
          scan: scrub repaired 0 in 0h57m with 0 errors on Sat May 21 05:57:27 2011
        config:

                NAME                       STATE     READ WRITE CKSUM
                vol1                       ONLINE       0     0     0
                  mirror-0                 ONLINE       0     0     0
                    c8t5000C50031B94409d0  ONLINE       0     0     0
                    c9t5000C50031BBFE25d0  ONLINE       0     0     0
                  mirror-1                 ONLINE       0     0     0
                    c10t5000C50031D158FDd0 ONLINE       0     0     0
                    c11t5000C5002C823045d0 ONLINE       0     0     0
                  mirror-2                 ONLINE       0     0     0
                    c12t5000C50031D91AD1d0 ONLINE       0     0     0
                    c2t5000C50031D911B9d0  ONLINE       0     0     0
                  mirror-3                 ONLINE       0     0     0
                    c13t5000C50031BC293Dd0 ONLINE       0     0     0
                    c14t5000C50031BD208Dd0 ONLINE       0     0     0
                  mirror-4                 ONLINE       0     0     0
                    c15t5000C50031BBF6F5d0 ONLINE       0     0     0
                    c16t5000C50031D8CFADd0 ONLINE       0     0     0
                  mirror-5                 ONLINE       0     0     0
                    c17t5000C50031BC0E01d0 ONLINE       0     0     0
                    c18t5000C5002C7CCE41d0 ONLINE       0     0     0
                logs
                  c19t0d0                  ONLINE       0     0     0
                cache
                  c6t5001517959467B45d0    FAULTED      2   542     0  too many errors
                spares
                  c7t5000C50031CB43D9d0    AVAIL

        errors: No known data errors

    This did not trigger any alerts from within Nexenta. I was under the impression that an L2ARC failure would not impact the system, but in this case it surely was the culprit. I've never seen any recommendations to RAID L2ARC. Removing the bad SSD entirely from the server got me back up and running, but I'm concerned about the impact of the device failure (and maybe the lack of notification from NexentaStor as well).

    Edit - What's the current best-choice SSD for L2ARC cache applications these days?
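
    For reference, a minimal sketch of how a faulted cache device is normally handled from the shell - the faulted device name is taken from the zpool status above, and the replacement SSD name is hypothetical:

        # Drop the faulted L2ARC device from the pool (cache devices can be
        # removed online; this does not touch the data vdevs).
        zpool remove vol1 c6t5001517959467B45d0

        # Later, add a replacement SSD back as cache (hypothetical device name).
        zpool add vol1 cache c6t5001517959467B46d0

        # Verify.
        zpool status vol1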

    Read the article

  • Command-line program to extract archives with automatic subdirectory detection

    - by ??????
    The title already says it. What I'm looking for is essentially the pure command-line counterpart to ark -ba <path> (on KDE), or file-roller -h <path> (on GNOME/Unity). Unfortunately, both ark and file-roller require X to be running. I'm aware that it is relatively simple to write a tool that detects archives based on their file extension and then runs the appropriate program:

        #!/bin/bash
        if [[ -f "$1" ]] ; then
            case $1 in
                *.tar.bz2) tar xjvf $1   ;;
                *.tar.gz)  tar xzvf $1   ;;
                *.bz2)     bunzip2 $1    ;;
                *.rar)     rar x $1      ;;
                *.gz)      gunzip $1     ;;
                *.tar)     tar xf $1     ;;
                *.tbz2)    tar xjvf $1   ;;
                *.tgz)     tar xzvf $1   ;;
                *.zip)     unzip $1      ;;
                *.Z)       uncompress $1 ;;
                *.7z)      7z x $1       ;;
                *) echo "'$1' cannot be extracted with this utility" ;;
            esac
        else
            echo "path '$1' does not exist or is not a file"
        fi

    However, that doesn't take care of subdirectory detection (and in fact, many extraction programs do not even supply such an option). So might there be a program that does exactly that? I wasn't sure whether or not to ask on askubuntu.com, because this question isn't really about Ubuntu, but rather about any Linux operating system. My apologies if this question does not fit in here.
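
    For what it's worth, a rough sketch of the usual trick for the subdirectory part: extract into a scratch directory, then promote a single top-level directory if the archive already had one, otherwise wrap the loose files in a new directory. Nothing below is a standard utility - it is only an illustration, and only a few archive types are shown (extend the case statement as in the script above):

        #!/bin/bash
        # Sketch: extract "$1" into a scratch directory, then decide whether the
        # archive already contained a single top-level directory.
        archive=$(readlink -f "$1")
        tmp=$(mktemp -d) || exit 1
        (
            cd "$tmp" || exit 1
            case "$archive" in
                *.tar.gz|*.tgz)   tar xzf "$archive" ;;
                *.tar.bz2|*.tbz2) tar xjf "$archive" ;;
                *.tar)            tar xf  "$archive" ;;
                *.zip)            unzip -q "$archive" ;;
                *.7z)             7z x "$archive" >/dev/null ;;
                *) echo "unsupported: $archive" >&2; exit 1 ;;
            esac
        ) || { rm -rf "$tmp"; exit 1; }
        entries=("$tmp"/*)   # note: dotfiles at the archive root are ignored here
        if [ ${#entries[@]} -eq 1 ] && [ -d "${entries[0]}" ]; then
            # The archive already contained one top-level directory - keep it.
            mv "${entries[0]}" .
        else
            # Loose files - wrap them in a directory named after the archive (crudely).
            dest="${archive##*/}"; dest="${dest%%.*}"
            mkdir -p "$dest"
            mv "$tmp"/* "$dest"/
        fi
        rmdir "$tmp" 2>/dev/null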

    Read the article

  • Fixing partitions and Installing BackTrack

    - by Josh
    My whole problem started when I was trying to install BackTrack (3 or 4). BackTrack was trying to install itself over my entire Windows partition (which I had combined into one when I installed Windows 7). So I booted back into Windows 7 on my netbook (Eee PC 1000HE, btw) and went into Disk Management with the aim of making a partition to install BackTrack on, but came out with a really screwed-up drive. I had two partitions when I started: the Windows system partition and my main partition, and they were blue in Disk Management (I think that has something to do with formatting). After I went through the steps to make a 10 GB FAT32 partition for BackTrack, I had about five partitions: one called PE: that I have no idea what it is, the Windows system partition, my main partition, 10 GB of unallocated space, and two other partitions under 50MB each that are both unused space. And they were all converted to simple volumes (green instead of blue). And BackTrack still wants to erase my entire drive. Question number 1: How do I get it back to the way it was? Question 2: How do I get BackTrack to dual boot on my netbook?

    Read the article

  • Oracle with Kerberos authentication and Windows 2003 Server as KDC

    - by Supaplex
    I am running Oracle 10.2 on a Windows 2003 Server SP2 which is also the domain controller on the network. I wish to switch authentication method from NTS to Kerberos. I have spent a lot of time trying to configure Oracle with Kerberos authentication from the Oracle Advanced Security option from the Net Manager utility. I have disabled NTS so Kerberos is promoted as the preferred authentication method. But as soon as the configuration is saved from Net Manager and I restart the Oracle server service, Oracle will not start. I don't know what Oracle is complaining about, because I don't know where to look for the Oracle error log. My first question is: how can I figure out what's bugging Oracle? My second question: is there a good tutorial for setting up Oracle on a Windows 2003 with Kerberos Authentication, where the Windows 2003 Server is the KDC? Maybe there is a book I can get? I have read Oracles own guide, but it is mostly for Linux/Unix. Thanks a lot!
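
    For reference, the Kerberos side of this is normally driven by a handful of sqlnet.ora parameters; a minimal sketch is below, with placeholder paths and service name, and it assumes the Kerberos adapter was installed as part of Advanced Security. As for where Oracle complains when the service will not start: on 10.2 the alert log (alert_<SID>.log in the bdump directory pointed to by background_dump_dest) and sqlnet.log are the usual first places to look.

        # sqlnet.ora sketch - paths and service name are placeholders
        SQLNET.AUTHENTICATION_SERVICES = (KERBEROS5)
        SQLNET.AUTHENTICATION_KERBEROS5_SERVICE = oracle
        SQLNET.KERBEROS5_CONF = C:\oracle\krb5\krb5.conf
        SQLNET.KERBEROS5_CC_NAME = C:\oracle\krb5\krb5cc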

    Read the article

  • nginx+mysql5 loadtesting configuration strangeness

    - by genseric
    I am trying to set up a new server running Debian 6 and trying to make it work smoothly under load. I've used a WordPress site as a test object and tried the configurations on http://blitz.io. When I increase the MySQL max_connections from 50 to 200, lots of timeouts start to occur; but at 50, there are no timeouts and pretty good response times. The nginx configuration is fine - I tuned the config so I don't see errors - so I presume it's related to the other configuration options in my.cnf. I have read a bit about the options but still can't figure out what the max_connections problem is all about. By the way, the server has 16GB of RAM and a fine i7 CPU. Here is the current my.cnf:

        [client]
        port      = 3306
        socket    = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket    = /var/run/mysqld/mysqld.sock
        nice      = 0

        [mysqld]
        wait_timeout        = 60
        connect_timeout     = 10
        interactive_timeout = 120
        user      = mysql
        pid-file  = /var/run/mysqld/mysqld.pid
        socket    = /var/run/mysqld/mysqld.sock
        port      = 3306
        basedir   = /usr
        datadir   = /var/lib/mysql
        tmpdir    = /tmp
        language  = /usr/share/mysql/english
        skip-external-locking
        bind-address        = 127.0.0.1
        key_buffer          = 384M
        max_allowed_packet  = 16M
        thread_stack        = 192K
        thread_cache_size   = 20
        myisam-recover      = BACKUP
        max_connections     = 50
        table_cache         = 1024
        thread_concurrency  = 8
        query_cache_limit   = 2M
        query_cache_size    = 128M
        expire_logs_days    = 10
        max_binlog_size     = 100M

        [mysqldump]
        quick
        quote-names
        max_allowed_packet  = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

    Thanks in advance. I asked this question on SO but it was closed as off-topic, so I believe this is a SF question.
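
    For what it's worth, a quick way to see whether the extra connections are actually being used up during a blitz.io run - these are standard MySQL status counters, nothing site-specific (the password on the command line is for illustration only):

        # One-off snapshots:
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_%';"
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"
        mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"

        # Or refresh every second while the load test runs:
        watch -n1 "mysqladmin -u root -pPASSWORD extended-status | grep -E 'Threads_(connected|running)'"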

    Read the article

  • Xen dom0 reports incorrect amount of RAM with dom0_mem set

    - by xen_amnesiac
    I've done a fair bit of searching about this, but have found nothing that answers my question. I have a system with 6GB of RAM which acts as a Xen server. For reference, it runs Ubuntu 12.04. I've set the dom0_mem kernel parameter in /etc/default/grub as follows:

        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=min:512M,max:512M"

    I've tried variations of that, with the same result. My question is this: with the above set, the dom0 reports a RAM amount of 422M in all applications. cat /proc/meminfo gives the following:

        $ cat /proc/meminfo
        MemTotal:         432472 kB
        MemFree:           54144 kB
        Buffers:           17640 kB
        Cached:           220104 kB
        SwapCached:        30172 kB
        Active:           136500 kB
        Inactive:         167780 kB
        Active(anon):       6156 kB
        Inactive(anon):    60516 kB
        Active(file):     130344 kB
        Inactive(file):   107264 kB
        Unevictable:          52 kB
        Mlocked:              52 kB
        SwapTotal:       1794044 kB
        SwapFree:        1682012 kB
        Dirty:                 0 kB
        Writeback:             0 kB
        AnonPages:         39572 kB
        Mapped:             8048 kB
        Shmem:               136 kB
        Slab:              44324 kB
        SReclaimable:      22012 kB
        SUnreclaim:        22312 kB
        KernelStack:        1280 kB
        PageTables:         3840 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     2010280 kB
        Committed_AS:     329192 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      313988 kB
        VmallocChunk:   34359417340 kB
        HardwareCorrupted:     0 kB
        AnonHugePages:         0 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:      524696 kB
        DirectMap2M:           0 kB

    top, htop, free -m, and byobu's RAM monitor all report the same amount. At first I thought this was because the onboard graphics were borrowing some memory, but I have now switched to a dedicated GPU and it persists. Is this normal behavior, or has something gone amiss? It's just about 100MB of RAM that's "gone", and I have no idea where it went. I understand that it's normal that not all RAM is available for allocation, but does the system really take that much relative to the amount of RAM available?
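
    For comparison, the hypervisor's own view of the dom0 allocation can be checked from dom0 itself - this assumes the xl toolstack that ships with Ubuntu 12.04 (older setups use xm with the same subcommands):

        # Memory actually assigned to dom0 by Xen, in MiB:
        xl list
        # Host memory as Xen sees it:
        xl info | grep -i mem

    If xl reports the full 512M while /proc/meminfo reports about 432M, the gap is largely memory the dom0 kernel reserves for its own code and data structures before MemTotal is calculated, rather than a leak.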

    Read the article

  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: How can I create a permanent link to a printer that is normally only accessible through Terminal Services (printer redirection), so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session? Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number and therefore the Sage layout sees it as a different printer.

    History behind the question:

      - Users connect via Terminal Services to a Sage server on a different site
      - Sage Line 50 v15 is used on that server
      - Users want to print invoices (Sage layouts) locally
      - The Sage server cannot see the users' local printers; to get around this, users use the printer redirection feature of Terminal Services

    The individual reports can be edited to point to a specific printer by default. This means the user just has to select an invoice and click print, then select the layout/report wanted, and it auto-prints that invoice to the specified default printer. The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)" - note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as a user disconnects from the Terminal Services session and then reconnects in the morning and goes to print, it has lost the connection to that printer. I understand why it fails: the printer is created on a per-session basis, so the layout cannot hold on to a connection from a previous session. Thanks in advance for any assistance...

    Read the article

  • Can't boot flash drive on GIGABYTE motherboard

    - by Deltik
    Situation When I try to boot from my flash drive, my GIGABYTE 970A-UD3 motherboard returns this: Loading Operating System ... Boot error All other motherboards I've tried support booting from that flash drive (and a backup flash drive). The operating systems I tried on both flash drives were created with usb-creator-gtk (Ubuntu USB Startup Disk Creator). I know that the motherboard understands that there is an operating system on the flash drives because when I erase them, it complains in an ALL CAPS RAGE that there isn't an operating system, which is correct. How can I boot a flash drive that's bootable from other motherboards on this motherboard? Qualification This question is not a duplicate of this one because directly writing to the flash drive as an ISO 9660 (dd if=operating_system.iso of=/dev/sdb) still does not have the motherboard recognize the operating system. This question should be a duplicate of this one because I provide more information not provided by that poster. This forum thread has broken links and does not have a solution to my problem. Nobody knows what's going on in this forum thread.

    Read the article

  • Requiring SSH-key Login From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity. Thus, I'd like to require that SSH keys be used for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box when I was trying to set this up, I'm acutely aware of the chance of screwing myself while trying to accomplish this. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There already exists a very similar question here, but I can't quite figure out how to get to what I want from that information. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config.

    Edit: Alternatively, is there a way to specify that certain users must authenticate with SSH keys, and others may authenticate with name/password?

    Solution that's currently working:

        # Globally deny logon via password, only allow SSH-key login.
        PasswordAuthentication no

        # But allow connections from the LAN to use passwords.
        Match Address 192.168.*.*
            PasswordAuthentication yes

    The Match Address block can also usefully be a Match User block (see the sketch below), answering my secondary question. For now I'm just chalking the failure to parse CIDR addresses up to a quirk of my install, and resolving to try again when I go to Ubuntu 10.04 not too long from now. PAM turns out not to be necessary.
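
    A small sketch of the Match User form mentioned above, with placeholder account names; newer OpenSSH releases also accept CIDR notation in Match Address:

        # Require keys for everyone...
        PasswordAuthentication no

        # ...except these named accounts, which may use passwords.
        Match User alice,bob
            PasswordAuthentication yes

        # CIDR form, where the installed OpenSSH supports it:
        # Match Address 10.0.0.0/8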

    Read the article

  • Are HDMI to VGA Adapters Really Device-Specific?

    - by allquixotic
    There are a lot of devices on the market right now (especially mobile devices) with a Micro-HDMI or Mini-HDMI port and no VGA or D-Sub output. Most manufacturers of said devices sell a cable that looks something like this: [image of an HDMI-to-VGA adapter cable]

    I have yet to find a cable like this that claims to work on a wide array of devices. In general, these cables claim to work with one specific device only. The way these cables work, I think, is that analog VGA signals are sent from the HDMI port on the device. This should work for devices that have special hardware on the motherboard/GPU capable of driving this. Is it the case that these cables have to be custom designed for each device? Or, is it rather that any device which possesses this special "signaling of analog VGA over the HDMI port" can be made to work with a cable that is physically compatible (i.e. the HDMI end plugs into the device and the VGA end accepts a VGA monitor cable)? Note that I am not looking for a product recommendation, just a conceptual clarification on what exactly these devices are doing. Also, a few remarks:

      - The cables like the one depicted here are not digital to analog converters. I know about these: they are expensive, and they are the ONLY solution if your device only outputs a digital signal and is incapable of driving analog VGA over the HDMI port.
      - The cables like the one depicted here are not straight crossover cables from VGA to HDMI, either. The crossover cables are designed to send a digital HDMI signal over the VGA port's wires; that is, the wire protocol is HDMI (digital) but the physical pinout is the same as VGA, even though nothing analog is happening. Once again, this is not the behavior that, I believe, the devices which I'm talking about in this question are doing.
      - The cabling and devices that this question is about transmit the analog VGA data over the HDMI port (the HDMI port is in the device outputting the data, and the VGA side is the monitor/projector).

    Read the article

  • Eclipse grinds to a halt when building workspace

    - by Chris Thompson
    Hi all, This is a bit of a vague question because, frankly, I don't even know where to begin diagnosing the issue. My eclipse (Galileo) installation grinds to a complete halt when it's building the workspace -- to the point where I can't even type. I know the Android SDK I have installed is a major culprit because I can watch the memory usage go through the roof (through the built-in heap monitor) when the Android SDK content loader starts up. Every time I save a file though, the program just stops. The message at the bottom of the screen says Building workspace (74%) and sits there for about 30 or so seconds before completing and returning the performance to normal. I have a few other plugins installed (Maven, SVN, etc) but I'm assuming the main issue is Android. Has anybody had similar issues or any luck correcting this sort of problem? If there's anymore information you think would be helpful, just let me know...I didn't want to do a core dump on this question... I'm running it on Windows 7 64-bit for what it's worth. Thanks! Chris
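
    Since the heap monitor already points at memory pressure, one knob worth checking is the JVM heap limit Eclipse starts with, set in eclipse.ini next to the Eclipse executable. A sketch of the relevant lines - the sizes are starting points to experiment with, not recommendations; everything after -vmargs is passed straight to the JVM:

        -vmargs
        -Xms256m
        -Xmx1024m
        -XX:MaxPermSize=256m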

    Read the article

  • How to determine which ports are open/closed on a FIREWALL?

    - by Rahl
    It seems no one has asked this question before (most regard host-based firewalls). Anyone familiar with port scanning tools (e.g. nmap) knows all about SYN scanning, FIN scanning, and the like to determine open ports on a host machine. Question is though, how do you determine the open ports on a firewall itself (disregard whether the host you're trying to connect to behind the firewall has those particular ports open or closed). This is assuming the firewall is blocking your IP connection. Example: We all communicate with serverfault.com through port 80 (web traffic). A scan on a host would reveal port 80 is open. If serverfault.com is behind a firewall and still allows this traffic through, then we can assume the firewall has port 80 open also. Now let's assume the firewall is blocking you (e.g. your IP address is under the deny list or is missing in the allowed list). You know port 80 has to be open (it works for appropriate IP addresses), but when you (the disallowed IP) attempt any scanning, all port scan attempts on the firewall drop the packet (including port 80, which we know to be open). So, how might we accomplish a direct firewall scan to reveal open/closed ports on the firewall itself, while still using the disallowed IP?
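
    For reference, this is roughly what an ACK scan is for: it classifies ports as filtered versus unfiltered at the device doing the filtering, rather than open versus closed on the host behind it. A hedged sketch with nmap - the target address and port range are placeholders, and it needs raw-socket (root) privileges:

        # An unfiltered port answers the bare ACK with a RST; a filtered port
        # answers with nothing (or an ICMP unreachable), which maps the
        # firewall's rules rather than the host's listening services.
        nmap -sA -Pn --reason -p 1-1024 203.0.113.10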

    Read the article

  • Is a VLAN unnecessary for a single server?

    - by moomoochoo
    DETAILS
    I've been researching web hosting solutions in Japan. Based on this question, one of the services available seems to be a VLAN. I've read about the advantages of such a system for a large organization, but there doesn't seem to be much information regarding smaller setups. I take that to mean that for one server it is likely to be unnecessary? My concern is that I don't know how many other servers are on the WAN, so regardless of how many servers I use, a VLAN might still be a good idea.

    SERVER INFO
    One dedicated server would be used. It would not be virtualized.

    MY RESEARCH SO FAR
    Based on comments here, a VLAN would be useful for mitigating these problems:

      - A user on another server could, either mistakenly or maliciously, assign one of your IP addresses to their server, resulting in a "duplicate IP" situation that would cause connectivity issues.
      - A user on another server could poison the ARP cache and potentially redirect traffic to snoop on communication intended to/from your server. (Later in the discussion this point was said to be unrealistic.)

    QUESTION
    Is it worthwhile getting a VLAN for one dedicated server? Will it be easier / the same / harder to manage?

    Read the article

  • Environment variables in bash_profile or bashrc?

    - by Viriato
    I have found this question [blog]: Difference between .bashrc and .bash_profile very useful, but after seeing the most-voted answer (very good, by the way) I have further questions. Towards the end of the most-voted, correct answer I see the following statement:

        Note that you may see here and there recommendations to either put environment
        variable definitions in ~/.bashrc or always launch login shells in terminals.
        Both are bad ideas.

    1. Why is it a bad idea (I am not trying to fight, I just want to understand)?
    2. If I want to set an environment variable and add it to the PATH (for example JAVA_HOME), where would be the best place to put the export entry: ~/.bash_profile or ~/.bashrc?

    If the answer to question number 2 is ~/.bash_profile, then I have two further questions:

    3.1. What would you put under ~/.bashrc? Only aliases?
    3.2. In a non-login shell, I believe ~/.bash_profile is not "picked up". If the export of the JAVA_HOME entry were in ~/.bash_profile, would I be able to execute the javac and java commands? Would it find them on the PATH? Is that the reason why some posts and forums suggest setting JAVA_HOME and the like in ~/.bashrc?

    Thanks in advance.
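
    For what it's worth, a common arrangement that sidesteps the login/non-login split is to keep environment variables in ~/.bash_profile and have it source ~/.bashrc; a small sketch follows (the JDK path is a placeholder):

        # ~/.bash_profile - read by login shells only.
        export JAVA_HOME=/usr/lib/jvm/java-6-openjdk    # placeholder path
        export PATH="$JAVA_HOME/bin:$PATH"

        # Pull in the interactive settings (aliases, prompt, etc.) as well.
        [ -f ~/.bashrc ] && . ~/.bashrc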

    Read the article

  • Intel Z77 vs H77 for intensive compiling, gaming [closed]

    - by Bilal Akhtar
    I'm in the market for a desktop motherboard (preferably ATX) that functions well with Intel i7-3770 Ivy Bridge processor at 3.4 GHz with LGA1155 socket. That processor is very fast, and it should handle all my tasks. My question is about the type of motherboard chipset I should choose to accompany it. I plan to use my rig for compiling and developing Debian package and other OS components, web development, occasional Android apps, chroots, VMs, FlightGear, other gaming but nothing serious, and heavy multitasking, all on Ubuntu. I do NOT plan to overclock, and I never will, so that's not a cause of concern for me. That said, I'm down to three chipset choices: Intel H77 Intel Z68 Intel Z77 I'm planning to go for H77 since I don't need any of the new features in Z77. I don't plan to use a second GPU and I will never overclock my CPU/GPU. My question is, will H77 based MoBos handle all my tasks well? Intel advertises that chipset as "everyday computing" but other sites say it's base functionality is the same as Z77. Intel rather advertises Z77 for "serious multitaskers, hardcore gamers and overclocking enthusiasts". But the problem with all Z77 motherboards I've seen is, they're way too expensive and their main feature seems to be overclocking, which won't be useful to me. Will I lose any raw CPU/GPU performance or HDD R/w with the H77 when comparing it to a Z77? Will heat, etc be an issue too? From what I've seen, Z77 motherboards have larger heat sinks when compared to H77 ones. Will that be an issue too, if I go with an H77 motherboard with no heat sinks for the chipset? The CPU will have a fan in both cases, of course. tl;dr When it comes to CPU/GPU performance and HDD r/w, is the Intel H77 chipset slower than the Z77? I don't care about overclocking or multiple GPUs, and for the processor, I'm set on Ivy Bridge i7-3770.

    Read the article

  • ubuntu: Installed php-mcrypt but it doesn't show up in phpinfo()

    - by jules
    A web app I'm trying to install on my Ubuntu 10.04 LTS box requires mcrypt, and is generating this error:

        Fatal error: Call to undefined function mcrypt_module_open().

    I know this is the same question as this one: Installed php-mcrypt but it doesn't show up in phpinfo(), but I tried several things, none of which worked, and I have additional questions. I would comment on the original thread but don't have enough reputation to do so; forgive me for the duplicate question. My versions of PHP and mcrypt are (both installed via apt-get):

        php:    5.3.2-1ubuntu4.10
        mcrypt: 5.3.2-0ubuntu

    Doing a php -m shows that the mcrypt module is installed. I installed mcrypt and php5-mcrypt via apt-get. Also, I'm using nginx as my web server. I have tried reinstalling mcrypt and restarting nginx, but still can't get mcrypt to show up in phpinfo() and calls to mcrypt are still broken. Here is some more info:

        $ php -i | grep "mcrypt"
        /etc/php5/cli/conf.d/mcrypt.ini,
        mcrypt
        mcrypt support => enabled
        mcrypt.algorithms_dir => no value => no value
        mcrypt.modes_dir => no value => no value

    I also checked that mcrypt is on in /etc/php5/cli/conf.d/mcrypt.ini and /etc/php5/cgi/conf.d/mcrypt.ini. Lastly, I'm using FastCGI with nginx. I googled around and saw suggestions to restart php5-fpm. I couldn't find php5-fpm in apt-get, and I'm not sure if I still need php5-fpm since I already have FastCGI. Is there anything else I'm missing?
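
    Since the CLI and the web server use different SAPIs, one sanity check is to ask the CGI binary the same question and then make sure the running FastCGI workers were restarted after the install. A sketch, assuming the stock php5-cgi package (the exact binary name and the way the workers are spawned may differ on your setup):

        # Does the CGI SAPI see the extension and its ini directory?
        php-cgi -m | grep -i mcrypt
        php-cgi -i | grep -iE "mcrypt|Loaded Configuration File"

        # The already-running FastCGI workers keep the old module list until
        # they are restarted; how to restart them depends on how they were
        # spawned (spawn-fcgi, an init script, etc.), so this is only a look:
        ps aux | grep [p]hp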

    Read the article


  • Dell PE2950 - slow IO rates for writing and reading locally

    - by OrenM
    I'm having a serious issue with a Dell PE2950 server. The server has really slow IO rates - so slow that I'm not able to use it anymore. I tried a few things to solve this:

      - changing the disks to new disks (configured as RAID 1)
      - changing the PERC card + PERC cables
      - reinstalling the OS (which I had to do anyway because of changing the disks), CentOS 5.5 x64
      - firmware updates to everything
      - virtual disk policy: No Read Ahead, Write Back, disk cache policy disabled

    OpenManage doesn't alert about anything. I also ran Dell's diagnostic tests and everything passed, and Dell didn't see anything in the DSET log. Dell offered to reseat everything, including the CPU; we did that as well, and the IO rates are still slow. I have several PE2950 servers and I have never had such a thing with any of those. All have similar or exactly the same hardware as this one, all configured the same, with the same OS (CentOS 5.5 x64), same disks, same RAID, same policy. Just for comparison, the problematic PE2950 server:

        [root@bad ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
        200000+0 records in
        200000+0 records out
        1638400000 bytes (1.6 GB) copied, 27.7946 seconds, 58.9 MB/s

        real    0m33.968s
        user    0m0.531s
        sys     0m26.000s

    A good PE2950 server (with the exact same hardware):

        [root@good ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
        200000+0 records in
        200000+0 records out
        1638400000 bytes (1.6 GB) copied, 3.19999 seconds, 512 MB/s

        real    0m7.694s
        user    0m0.053s
        sys     0m4.057s

    Hopefully you will have an idea what can cause the problem.
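
    One more comparison that is easy to make between the good and the bad box: a PERC quietly falls back to write-through when its battery is degraded, even though the requested policy is still Write Back. OpenManage is already installed, so omreport can show the battery state and the effective policies (controller 0 is an assumption):

        omreport storage battery
        omreport storage vdisk controller=0
        omreport storage controller controller=0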

    Read the article

  • Update BIOS on Sun Fire X4150 server

    - by Massimo
    I have some Sun Fire X4150 servers with a very old BIOS release (1ADQW015), which seems to have some compatibility problems with VMware ESX Server 3.5 and Windows 2008 R2 virtual machines, so I want to update the BIOS on them. The problem: according to this page, if your servers run ELOM (mine do), you first need to update to the latest ELOM release, then to the interim transition release, and only then can you update to the latest firmware. OK, I'm willing to do that... but it looks like Sun (now Oracle) will happily let you download the latest firmware DVD (3.3.0), while it will not let you download the transition release (2.0) if you don't have a support contract. Well, I actually don't care at all about the servers' management controllers (we don't even use them), so upgrading from ELOM to ILOM is totally irrelevant to me; but I need to update the servers' BIOS. So my question is: can I update the servers' BIOS to the latest version without doing the full ELOM-to-ILOM migration, or will this not work (or even make the servers unusable)? Do BIOS versions and SP ones need to be matched, or can one be updated without bothering with the other? Bonus question: if this whole ELOM-to-ILOM thing actually is needed in order to update the BIOS, can that 2.0 CD-ROM be obtained without having a support contract with Sun/Oracle (which we are definitely not going to sign, given that this is quite old hardware)?

    Update: I tried upgrading only the BIOS on one of the servers, and it didn't boot anymore. So it really looks like a full firmware upgrade is needed, and the management controller and BIOS versions should be kept in sync. So... where can I find that *&!£%$% 2.0 CD-ROM? Or at least the transition firmware that can be found on it?

    Read the article

  • Issue running 32-bit executable on 64-bit Windows

    - by David Murdoch
    I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server, the following errors are displayed (running from cmd.exe):

        C:\>wkhtmltopdf http://www.google.com google.pdf
        Loading pages (1/5)
        QFontEngine::loadEngine: GetTextMetrics failed ()          ] 10%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()          ] 36%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        // ...etc....

    and the PDF is created and saved... just WITHOUT text. All form fields, images, borders, tables, divs, spans, ps, etc. are rendered accurately... just void of any text at all.

    Server information:

      - Windows edition: Windows Server Standard Service Pack 2
      - Processor: Intel Xeon E5410 @ 2.33GHz
      - Memory: 8.00 GB
      - System type: 64-bit Operating System

    Can anyone give me a clue as to what is happening and how I can fix this? Also, I wasn't sure what to tag/title this question with... so if you can think of better tags/title, comment them or edit the question. :-)

    Read the article
