Search Results

Search found 41561 results on 1663 pages for 'linux command'.

  • connecting to a network using route command

    - by ami
     I have a computer with an external IP (192.168.223.220) and an internal address (10.1.1.20) that it uses to reach some servers which have no external addresses, only 10.1.1.xx ones. To reach those servers from other machines I used the following command: route ADD 10.1.1.0 MASK 255.255.255.0 192.168.223.220, and I was then able to connect to the servers by their 10.1.1.xx addresses. The problem is that the hard disk of the main server (192.168.223.220) died and was replaced; since then I can no longer connect to the servers as before. The route command succeeds and I can ping 10.1.1.20, but not the other servers. I am using Windows XP; the printouts are below. Thanks.

         D:\AurosHome\Scripts>ipconfig /all

         Windows IP Configuration
            Host Name . . . . . . . . . . . . : N100-master
            Primary Dns Suffix  . . . . . . . :
            Node Type . . . . . . . . . . . . : Unknown
            IP Routing Enabled. . . . . . . . : No
            WINS Proxy Enabled. . . . . . . . : No

         Ethernet adapter Local Area Connection 3:
            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Intel(R) PRO/1000 EB Network Connection with I/O Acceleration #2
            Physical Address. . . . . . . . . : 00-30-48-34-BA-B9
            Dhcp Enabled. . . . . . . . . . . : No
            IP Address. . . . . . . . . . . . : 192.168.225.180
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . : 192.168.225.254
            DNS Servers . . . . . . . . . . . : 192.168.225.2

         Ethernet adapter Local Area Connection:
            Connection-specific DNS Suffix  . :
            Description . . . . . . . . . . . : Intel(R) PRO/1000 EB Network Connection with I/O Acceleration
            Physical Address. . . . . . . . . : 00-30-48-34-BA-B8
            Dhcp Enabled. . . . . . . . . . . : No
            IP Address. . . . . . . . . . . . : 10.1.1.20
            Subnet Mask . . . . . . . . . . . : 255.255.255.0
            Default Gateway . . . . . . . . . :

         Ethernet adapter Local Area Connection 2:
            Media State . . . . . . . . . . . : Media disconnected
            Description . . . . . . . . . . . : Mellanox IPoIB Adapter
            Physical Address. . . . . . . . . : 00-02-C9-25-34-0D

         D:\AurosHome\Scripts>route print

         Interface List
         0x1 ........................... MS TCP Loopback interface
         0x2 ...00 30 48 34 ba b9 ...... Intel(R) PRO/1000 EB Network Connection with I/O Acceleration #2 - Packet Scheduler Miniport
         0x3 ...00 30 48 34 ba b8 ...... Intel(R) PRO/1000 EB Network Connection with I/O Acceleration - Packet Scheduler Miniport
         0x10005 ...00 02 c9 25 34 0d ...... Mellanox IPoIB Adapter - Packet Scheduler Miniport
         ===========================================================================
         Active Routes:
         Network Destination        Netmask          Gateway        Interface  Metric
                   0.0.0.0          0.0.0.0  192.168.225.254  192.168.225.180      10
                  10.1.1.0    255.255.255.0        10.1.1.20        10.1.1.20      10
                 10.1.1.20  255.255.255.255        127.0.0.1        127.0.0.1      10
            10.255.255.255  255.255.255.255        10.1.1.20        10.1.1.20      10
                 127.0.0.0        255.0.0.0        127.0.0.1        127.0.0.1       1
             192.168.225.0    255.255.255.0  192.168.225.180  192.168.225.180      10
           192.168.225.180  255.255.255.255        127.0.0.1        127.0.0.1      10
           192.168.225.255  255.255.255.255  192.168.225.180  192.168.225.180      10
                 224.0.0.0        240.0.0.0        10.1.1.20        10.1.1.20      10
                 224.0.0.0        240.0.0.0  192.168.225.180  192.168.225.180      10
           255.255.255.255  255.255.255.255        10.1.1.20        10.1.1.20       1
           255.255.255.255  255.255.255.255        10.1.1.20            10005       1
           255.255.255.255  255.255.255.255  192.168.225.180  192.168.225.180       1
         Default Gateway:   192.168.225.254
         Persistent Routes:
           None

    Read the article

  • lamp -- edit PHP file but doesn't change web output -- including die()

    - by Reid W
     The server is a standard Linux box on Amazon Web Services: CentOS 5, Apache, PHP 5.3, no APC. It's worked fine for over a year, but now when I edit some (but not all) PHP files on the server using vi, the changes don't affect the web output. For example, I edit myfile.php and put a die() at the top, but when I load the page in my web browser, instead of the die() I see the content that would show up if the die() weren't there. Running svn update on the file in question doesn't help either. Files are on an Amazon EBS partition symlinked to /var/www/html. Just to reiterate -- this has worked fine for a long time. Restarting Apache didn't help, nor did rebooting the server. What's weird is that it's just some of the files, not all. File ownership/permissions are the same for the "good" and "problem" files. I'm not a Linux newbie, but I'm at a complete loss with this and couldn't find anything on Google either. Any hints would be much appreciated!
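
     For reference, a minimal diagnostic sketch one might run on such a box (the symlinked docroot path is taken from the question; the opcode-cache check is only an assumption, since no APC is reported):

         # Does the file Apache serves really differ from the one being edited?
         md5sum /var/www/html/myfile.php "$(readlink -f /var/www/html/myfile.php)"
         # Is any opcode/user cache loaded after all?
         php -m | grep -i -E 'apc|opcache|eaccelerator|xcache'
         # Is Apache serving this vhost from the docroot you think it is?
         httpd -S 2>&1 | grep -i -E 'vhost|docroot|namevhost'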

    Read the article

  • HP DL380 G3 2U For Basic Web Server in 2012

    - by ryandlf
     I have an opportunity to pick up a used HP DL380 G3 2U for $100. I'm looking for a basic entry-level web server that I can host a small to medium size website on and more or less learn the ins and outs of running my own web server, before I bite the bullet and spend a couple of grand on a server. The specs are:

         Dual (2) Intel Xeon 2.4GHz, 400MHz, 512KB cache
         4GB PC2100 ECC Registered memory
         6 x 72GB 10K U320 SCSI hard drives
         Smart Array 5i RAID controller
         Redundant power supplies
         DVD/floppy, dual Intel GB NICs, USB

     Or would I be better off spending a couple hundred bucks on something like this new HP? It seems like the only major difference is SATA and a bit of storage, but I will likely be implementing a separate storage system of some sort anyway. I guess it also wouldn't hurt to mention that I plan on running a Linux server distro, so is the hardware likely to support Linux on a system that is four generations old? I don't mind spending a couple hundred extra dollars if it's a better solution, but as mentioned previously I am simply looking for a server to learn on and probably use for a year or so while I put together a small to medium size website.

    Read the article

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
     I am administering a server running Ubuntu Server with Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare them over time to spot things like memory leaks, but I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot I don't get a message saying something like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night; how often should it be run? What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of IPs that have been blocked did not seem useful. Many thanks
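
     As an illustration only (not from the question), here is the sort of nightly check and fsck scheduling such a setup might use; the device, thresholds and forcefsck behaviour are assumptions and vary by Ubuntu release:

         # Adjust how often ext filesystems force a check at boot (here: every 30 mounts or 30 days)
         sudo tune2fs -c 30 -i 30d /dev/sda1
         # Schedule a full check at the next reboot instead of waiting for the counter
         sudo touch /forcefsck
         # Simple nightly health snapshot (run from cron)
         df -h; free -m; ps aux --sort=-%mem | head -n 15
         grep -c -E 'Failed|Invalid' /var/log/auth.log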

    Read the article

  • RTL8188CE doesn't connect to any wifi access points

    - by Drakmail
     I'm using NetworkManager to connect; I also tried iwconfig. The results are the same. I even tried connecting to an open access point, with the same result. More information:

         Drakmail@thinkpad-x220:~$ lspci | grep Network | grep -v Ethernet
         03:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01)

         Drakmail@thinkpad-x220:~$ uname -a
         Linux thinkpad-x220 3.1.0 #1 SMP PREEMPT Wed Oct 26 02:19:49 UTC 2011 x86_64 Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz GenuineIntel GNU/Linux

         Drakmail@thinkpad-x220:~$ dmesg | tail -n 10
         [  846.901574] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
         [  906.812461] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
         [  966.728810] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
         [ 1026.639676] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin
         [ 1030.925574] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin

     At this point I tried to connect to an open wifi AP:

         [ 1031.252403] wlan0: direct probe to 00:24:8c:55:fa:ed (try 1/3)
         [ 1031.451943] wlan0: direct probe to 00:24:8c:55:fa:ed (try 2/3)
         [ 1031.651658] wlan0: direct probe to 00:24:8c:55:fa:ed (try 3/3)
         [ 1031.851354] wlan0: direct probe to 00:24:8c:55:fa:ed timed out
         [ 1086.544960] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin

     My distribution:

         Drakmail@thinkpad-x220:~$ cat /etc/*version
         AgiliaLinux release 8.0.0 (Sammy)

     (Something between Slackware and Arch Linux.) I also noticed that the wifi module tries to load the firmware file far too often. Any ideas what it could be?
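
     A few generic first checks for this kind of rtlwifi probe timeout, offered as a sketch rather than a known fix (module names assume the in-kernel rtl8192ce driver):

         # Make sure the radio is not soft- or hard-blocked
         rfkill list
         # Reload the Realtek driver stack and watch the firmware load
         sudo modprobe -r rtl8192ce rtl8192c_common rtlwifi
         sudo modprobe rtl8192ce
         dmesg | tail -n 20
         # Try a manual scan to see whether the AP is visible at all
         sudo iwlist wlan0 scan | grep -i essid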

    Read the article

  • Reread partition table without rebooting?

    - by Teddy
     Sometimes, when resizing or otherwise mucking about with partitions on a disk, cfdisk will say: "Wrote partition table, but re-read table failed. Reboot to update table." (This also happens with other partitioning tools, so I'm thinking this is a Linux issue rather than a cfdisk issue.) Why is this, why does it only happen sometimes, and what can I do to avoid it? Note: please assume that none of the partitions I am actually editing are open, mounted or otherwise in use. Update: cfdisk uses ioctl(fd, BLKRRPART, NULL) to tell Linux to re-read the partition table. Two of the other tools recommended so far (hdparm -z DEVICE, sfdisk -R DEVICE) do exactly the same thing. The partprobe DEVICE command, on the other hand, seems to use a newer ioctl called BLKPG, which might be better; I don't know. (It also falls back on BLKRRPART if BLKPG fails.) BLKPG seems to be a "this partition has changed; here is the new size" operation, and it looks like partprobe calls it individually on all the partitions of the device passed, so it should work if the individual partitions are unused. However, I have not had the opportunity to try it.
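
     For reference, the usual ways to ask the kernel to re-read a partition table from the command line, as a sketch (replace /dev/sdX with the actual disk):

         sudo partprobe /dev/sdX            # BLKPG per partition, falls back to BLKRRPART
         sudo blockdev --rereadpt /dev/sdX  # plain BLKRRPART, fails if any partition is in use
         sudo hdparm -z /dev/sdX            # same re-read ioctl, issued by hdparm
         cat /proc/partitions               # verify the kernel's current view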

    Read the article

  • Efficient mirroring of directories using hardlinks

    - by zoqaeski
    I'm backing up my music collection on to a number of NTFS-formatted external hard-drives; however, as I store my main collection in FLAC and have my library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I backup my laptop's library to the "mp3s" directory, I want to only copy across MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, discnumber if applicable, etc, and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment, I use the following command to perform the backups, but this is time-consuming due to the expensive nature of finding hard links. rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
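
     As an alternative sketch (not from the question), the same backup can be done in two explicit passes, which avoids rsync having to hunt for link targets; the source path is hypothetical, and both destination trees must stay on the same NTFS volume for the hard links to work:

         src=/home/user/mp3-library        # hypothetical laptop library
         dst=/media/ntfspocket
         cd "$src"
         find . -name '*.mp3' -print0 | while IFS= read -r -d '' f; do
             mkdir -p "$dst/mp3s/$(dirname "$f")"
             if [ -e "$dst/music/$f" ]; then
                 ln -f "$dst/music/$f" "$dst/mp3s/$f"   # already in music/: hard-link it
             else
                 cp -p "$f" "$dst/mp3s/$f"              # new MP3: copy it across
             fi
         done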

    Read the article

  • D-Link wireless router losing outbound data

    - by gsteinert
     I have a Linux box running the Apache web server behind a D-Link wireless router (nothing fancy, just the standard kit that comes with Virgin Media broadband). My issue is that when requesting web pages (from within the network or via the web), the tail end of the page seems to get dropped. For example, I tried to display a text-only file, and all I could get was the first 40-70% of the file (it changed slightly with each refresh). The Apache access logs show that only part of the data was sent (~6000 bytes instead of the 12000+ bytes of the file). Removing my router from the equation fixes the issue and I can download any files, no matter the size, with no problems. My theory is that the uploaded packets are either being dropped or held up by the config of the router. Is there anything I can do to alleviate the problem? (Perhaps a way of reconfiguring the router to upload packets harder/better/faster/stronger, or an option in Apache that provides a workaround.) As a last resort I will get a second NIC for my Linux box and turn it into a router, but that would mean the box will be on 24/7... not the most ideal of circumstances. Gary
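
     One cheap thing to test in this situation is an MTU/fragmentation problem between the router and the server; a sketch, where the hostname is a placeholder and the 1472-byte payload assumes a standard 1500-byte Ethernet MTU:

         # From a client, probe the largest unfragmented packet that gets through
         ping -M do -s 1472 server.example.com
         # On the Linux box, clamp TCP MSS to the discovered path MTU (generic iptables rule)
         sudo iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu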

    Read the article

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
     I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged into, so I'm looking for a way to do this through the command line. This post suggests running:

         $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage

     However, I get an "unknown option -w" error. This slightly modified command:

         $ sudo modprobe -r usb_storage

     fails with the message "FATAL: Module usb_storage is in use". If I try to kill -9 the processes marked [usb-storage] before running it, they refuse to die (I think because they are deeply tied to the kernel). Does anyone know of a way to do this? NOTE: I cross-posted this on Server Fault as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.
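
     A commonly used alternative is to re-enumerate the port through sysfs rather than unloading modules; a sketch (the bus-port ID 1-1 is hypothetical -- find the real one with lsusb and the sysfs listing):

         lsusb                                    # identify the device
         ls /sys/bus/usb/devices/                 # find its bus-port ID, e.g. 1-1
         # Unbind and rebind the device, roughly equivalent to unplug/replug
         echo '1-1' | sudo tee /sys/bus/usb/drivers/usb/unbind
         sleep 2
         echo '1-1' | sudo tee /sys/bus/usb/drivers/usb/bind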

    Read the article

  • locale: What is the LANGUAGE variable used for? (and when?)

    - by seya
     I am trying to understand the locales used in Linux. On my Ubuntu 11.10 system, locale prints the following:

         LANG=en_DK.UTF-8
         LANGUAGE=en_GB:en
         LC_CTYPE=en_GB.UTF-8
         LC_NUMERIC="en_DK.UTF-8"
         LC_TIME="en_DK.UTF-8"
         LC_COLLATE=en_GB.UTF-8
         LC_MONETARY="en_DK.UTF-8"
         LC_MESSAGES=en_GB.UTF-8
         LC_PAPER="en_DK.UTF-8"
         LC_NAME="en_DK.UTF-8"
         LC_ADDRESS="en_DK.UTF-8"
         LC_TELEPHONE="en_DK.UTF-8"
         LC_MEASUREMENT="en_DK.UTF-8"
         LC_IDENTIFICATION="en_DK.UTF-8"
         LC_ALL=

     (en_DK is for the international date format, continental European number formatting (1.234,56), etc.) I think I understand what the LC_* family does, that LANG is the fallback if one of them is not set, and that LC_ALL sets all of the LC_* variables to its value. What I don't know yet is what LANGUAGE is used for. The notation en_GB:en reminds me of the Accept-Language HTTP header. With the settings above it would mean that British English is used if a translation for it exists; otherwise any existing English translation (en_US, en_AU, ..., whatever) would be used. Am I right so far? Also, which programs actually obey the LANGUAGE setting, and how does it differ from LC_MESSAGES? Unfortunately, man locale only documents the LC_* family, and searching the web for 'linux locale LANGUAGE' or similar is a moot point (of course "language" is a word often used when talking about locales, and it may also appear in the output of locale without being discussed). Can anybody help me out here?
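
     A quick way to see LANGUAGE in action: gettext-based programs walk its colon-separated priority list before falling back to LC_MESSAGES/LANG. A sketch, assuming the French and German locales and coreutils translations are installed:

         # Error message comes out in French if an fr translation exists, otherwise German
         LC_MESSAGES=de_DE.UTF-8 LANGUAGE=fr:de ls /nonexistent
         # With LANGUAGE empty, LC_MESSAGES alone decides
         LC_MESSAGES=de_DE.UTF-8 LANGUAGE= ls /nonexistent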

    Read the article

  • Linux servers in a (primarily) Windows (AD) environment

    - by HannesFostie
     When I arrived at my current position, our environment consisted almost exclusively of Windows servers. However, I am a big fan of using Linux for certain applications, like the web gallery I was asked to set up, a simple SFTP server, Nagios for monitoring, etc. I do fine setting these up, but not being a Linux expert, I am not sure how to properly join these servers to the domain and was therefore wondering what procedures or guidelines other people follow. We often use ping -a to quickly figure out the hostname of a certain server, but this does not seem to work for the Linux machines, most likely because of the whole WINS/NetBIOS thing, I assume. I just joined one server to the domain, but probably missed something, because it's not working even after a DNS flush. Besides that, the couple of procedures I've found so far are pretty extensive and most of the time don't seem worth the effort. Best-case scenario: I download some kind of client (smbclient?), enter the domain name and maybe the server to use, supply an administrator password, and that's it. Is that possible at all? Thanks
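
     For what it's worth, the short path on most distributions is Samba/winbind (or sssd/realmd on newer systems); a rough sketch of the Samba route, with EXAMPLE.COM standing in for the real domain and package names varying by distribution:

         sudo apt-get install samba winbind krb5-user     # or: yum install samba-winbind krb5-workstation
         # Point /etc/samba/smb.conf at the domain (security = ads, realm = EXAMPLE.COM, workgroup = EXAMPLE)
         sudo net ads join -U Administrator               # prompts for the domain admin password
         sudo net ads testjoin                            # should report that the join is OK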

    Read the article

  • Plesk command working in manual script, not in cronjob

    - by dsaunier
     Hi, in order to install a hosting plan, I use Plesk's commands over SSH as specified in their official guide. When typed directly in SSH (PuTTY), it works perfectly. The line is as follows, with values obviously hard-coded when run from the CLI:

         /usr/local/psa/bin/domain --create '.$url.' -owner mynamehere -ip '.IP_SERVER_PLESK.' -status enabled -hosting true -hst_type phys -login '.$ftp_user.' -passwd '.$ftp_pw.' -www false -php true -php_safe_mode false -hard_quota 100M

     I then put that request in a PHP script that does other things after the hosting is installed. Now for the weird part: when calling that script from the CLI it also works fine; I run ./myscript.php and it installs the hosting, then sends emails, etc. However, after I create a cronjob to have that same script called regularly, the Plesk command fails. The cronjob is started in Plesk as:

         */15 * * * * /usr/bin/php /home/scripts/myscript.php

     It works fine for everything BUT the Plesk hosting install, which returns "Unable to read Control Panel configuration file" and therefore does not install the domain hosting. Still, this is the same script that I call manually! On that server, is the PHP used to run a cronjob different from the one used in the CLI? What am I missing? Help greatly appreciated. Regards.
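
     The usual suspect with "works in a shell, fails from cron" is cron's stripped-down environment (different user, PATH, HOME); a generic comparison sketch, with the script path taken from the question:

         # 1. See what environment cron actually gives you (add temporarily to the crontab):
         #      * * * * * env > /tmp/cron-env.txt
         # 2. Compare it with your interactive shell, then re-run the script under a bare environment:
         diff <(sort /tmp/cron-env.txt) <(env | sort)
         env -i /usr/bin/php /home/scripts/myscript.php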

    Read the article

  • Static DHCP binding

    - by Alex
     Good time of day, SF people. I have created a manual DHCP binding entry on a Cisco router so that a client would always get the same lease. The client wants to get the same address on both of his dual-boot Linux systems. He succeeds in getting an IP address leased on one of the dual-boot operating systems, but when he reboots into the other one he gets a lease for a completely different address. I don't get it. The MAC addresses are the same (we checked with ifconfig), so what could be happening here? Why is the router confused? Or is it something else? Also, how can I check (on Linux) the IP address of the DHCP server my lease came from? The configuration on the Cisco is:

         ip dhcp pool MANUAL_BINDING0001
            host 192.168.0.64 255.255.255.0
            hardware-address dead.beef.1337
            dns-server 192.168.8.11
            default-router 192.168.0.254
            domain-name verynicedomainigothere.cn

     PS. Is it mandatory to use a client-name configuration line?
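
     To answer the side question about finding the DHCP server's address from the Linux side, a sketch (lease-file paths and dhclient options vary slightly by distribution):

         # dhclient records the server that answered in its lease file
         grep dhcp-server-identifier /var/lib/dhcp/dhclient*.leases
         # Or release/renew verbosely and watch for the DHCPACK line
         sudo dhclient -r eth0 && sudo dhclient -v eth0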

    Read the article

  • Port 80 not accessible Amazon ec2

    - by Jasper
     I have started an Amazon EC2 instance (Red Hat Linux), and Apache as well. But when I try http://MyPublicHostName I get no response. I have ensured that my security group allows access to port 80. I can reach port 22 for sure, as I am logged into the instance via SSH. Within the Amazon EC2 Linux instance, when I do wget http://localhost I do get a response, which confirms that Apache and port 80 are indeed running fine. Since Amazon starts instances in a VPC, do I have to do anything there? In fact I cannot even ping the instance, although I can SSH to it! Any advice? EDIT: Note that I had edited the /etc/hosts file earlier to make the 389-ds (LDAP) installation work. My /etc/hosts file looks like this (IP addresses shown as w.x.y.z):

         127.0.0.1   localhost.localdomain localhost
         w.x.y.z     ip-w-x-y-z.us-west-1.compute.internal
         w.x.y.z     ip-w-x-y-z.localdomain
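
     Generic checks for "reachable on 22 but not 80" on such an instance, as a sketch (assumes the stock Red Hat Apache layout under /etc/httpd):

         # Is Apache listening on all interfaces, not just 127.0.0.1?
         sudo netstat -tlnp | grep ':80'
         # Any host-level firewall rules in the way? (the EC2 security group is separate)
         sudo iptables -L -n -v
         # Check the Listen directive in the Apache config
         grep -R '^Listen' /etc/httpd/conf /etc/httpd/conf.d 2>/dev/null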

    Read the article

  • Centos 5.5 install PearDB

    - by John Gardeniers
     Disclaimer: I use Linux for some jobs but I am not a Linux admin. I have a CentOS 5.4 machine which performs some server duties and doubles as a web-site development machine. PHP 5.3.3 was installed from RPM with the --without-pear option. I now wish to use PEAR DB but can't figure out how to install it. If I run yum install php-pear-db, it comes back with:

         Error: Missing Dependency: php = 5.1.6-27.el5_5.3 is needed by package php-devel-5.1.6-27.el5_5.3.i386 (updates)

     The only RPM I've found that looks like it might be close currently has a dead link, so I can't even try that. What would be the best way to go about this? Is there a way to reinstall from the RPM and include PEAR? Can I install the dependency without breaking the current installation? Should I try to uninstall the original PHP and reinstall it from source, complete with PEAR? I thought this might have been an SU question, but the FAQ over there suggests otherwise.
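
     If the goal is just the PEAR DB package rather than the distro's php-pear-db RPM, one possible route is to install PEAR itself and pull DB through it; a sketch, assuming a PEAR installer compatible with the PHP 5.3 build is available:

         sudo yum install php-pear        # PEAR installer packaged for the running PHP, if available
         sudo pear install DB             # the (legacy) PEAR DB package
         pear list                        # confirm DB shows up
         php -r 'require "DB.php"; echo "DB loaded\n";'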

    Read the article

  • Unwanted blank lines when committing from SVN

    - by Alon_A
     I'm using CentOS 5.8 as a web server and TortoiseSVN for keeping our code versions in sync. We write the code on Windows 7 Professional 64-bit with NetBeans and Notepad++. I'm checking the code files (.php) out on the Linux shell with this command:

         svn co svnFolder serverFolder --username **** --password ****

     The problem is that afterwards, when I open the files directly from the server (for debugging) with Notepad++ (via View/Edit in FileZilla), I have extra blank lines. Code that looks like this on localhost (in Notepad++):

         private $producer;
         private $account;
         private $admin;
         private $producerEvents;
         private $accountProducers;
         private $adminAccounts;

     will look like this after checking it out on the server (again, in Notepad++):

         private $producer;

         private $account;

         private $admin;

         private $producerEvents;

         private $accountProducers;

         private $adminAccounts;

     If I upload the files by FTP, no blank lines are added. How can I solve it? Thanks.
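
     This pattern (every line followed by a blank one) is typically a CRLF/LF line-ending mismatch; a sketch of how one might confirm and normalise it (file names are examples):

         # Show line endings explicitly: CRLF files end each line with ^M$
         cat -A serverFolder/somefile.php | head
         # Let Subversion normalise endings per client platform from now on
         svn propset svn:eol-style native somefile.php
         svn commit -m "Set svn:eol-style native on PHP sources"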

    Read the article

  • Using iptables to make a VPN router

    - by lost_in_the_sauce
     I am attempting to make a VPN connection to a third-party VPN site, then forward traffic from my internal computers (SSH and ping for now) out to the VPN site using iptables:

         3rd Party <- (tun0/eth0) Linux VPN Box (eth1) <- Windows 7 test box

     I am running CentOS 6.3 and have two network connections, eth0 (public) and eth1 (private). I am running vpnc-0.5.3-4, which currently connects to my destination. When connected I am able to ping the destination IP addresses, but that is as far as I get:

         ping -I tun0 10.1.33.26   # success
         ping -I eth0 10.1.33.26   # fail
         ping -I eth1 10.1.33.26   # fail

     My private-network Windows 7 test box is set up with the eth1 (private) address of the VPN server as its gateway, and it can ping the server fine. I need iptables to send the Windows 7 traffic out the VPN tunnel. I have tried many different iptables configurations from this site and others over the past few days; the examples are either too simple or overly complicated. The only thing this server does is connect to the VPN and forward all traffic, so we can "flush" everything and start from scratch here. It is a blank slate.

         #!/bin/bash
         echo "Define variables"
         ipt="/sbin/iptables"
         echo "Zero out all counters"
         $ipt -Z
         $ipt -t nat -Z
         $ipt -t mangle -Z
         echo "Flush all active rules, delete all chains"
         $ipt -F
         $ipt -X
         $ipt -t nat -F
         $ipt -t nat -X
         $ipt -t mangle -F
         $ipt -t mangle -X
         $ipt -P INPUT ACCEPT
         $ipt -P FORWARD ACCEPT
         $ipt -P OUTPUT ACCEPT
         $ipt -t nat -A POSTROUTING -o tun0 -j MASQUERADE
         $ipt -A FORWARD -i eth1 -o eth0 -j ACCEPT
         $ipt -A FORWARD -i eth0 -o eth1 -j ACCEPT
         $ipt -A FORWARD -i eth0 -o tun0 -j ACCEPT
         $ipt -A FORWARD -i tun0 -o eth0 -j ACCEPT

     Again, I have tried many variations of the above and many other rules from other posts, but haven't been able to move forward. It seems like such a simple task, and yet...
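
     A minimal sketch of the two pieces this setup appears to be missing -- kernel forwarding and a FORWARD path between eth1 and tun0 (offered as an assumption, not a verified fix):

         # Enable routing between interfaces
         sysctl -w net.ipv4.ip_forward=1
         # Let LAN traffic (eth1) out through the tunnel and replies back in
         iptables -A FORWARD -i eth1 -o tun0 -j ACCEPT
         iptables -A FORWARD -i tun0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
         # NAT the LAN behind the tunnel address (already present in the script above)
         iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE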

    Read the article

  • Need solution for Network/Servers.

    - by rehanplus
     Dear All, please help me. I have just joined a new hospital and want some help managing my network. There are some requirements. Current network: there is a DSL connection terminated on a Linux proxy, which is connected to D-Link layer-2 switches and provides Internet access to more than 200 PCs (this will grow to around 1,500 in a couple of months). The D-Link switches are not configured yet. There are also a database server, a report server and an application server. In the near future the application should be reachable by local users as well as remote users over the Internet via our web server. We also have a file-sharing server, and all of these servers, databases and PCs are on a single subnet. Required network: all I want is to secure my network from outside access, allowing only specific users in through the web application; they will submit their records for the patient-card and appointment facility (written to our database) but must not be able to touch other network resources. Secondly, in-house users also need to access the same application and the Internet, but they must have unique identities and rights (e.g. the finance and lab departments get only limited access to the application). Notes: should I create VLANs or split the subnet? Will a firewall solve my issues? Is a router needed in this type of scenario? Currently all access is restricted at the Linux proxy. Thanks.

    Read the article

  • Running Ubuntu on Toshiba L630

    - by Toshibalin
     After suffering the following error while trying to install Ubuntu onto my Toshiba L630: (kernel_thread_helper+0x0 0x10), I bypassed it and managed to install the OS by going into the installation setup menu (by pressing F6 while booting from my Ubuntu installation CD) and setting "acpi=off". I believe this works because it stops the laptop doing checks that aren't compatible with the software. However, now that I have Ubuntu installed, I cannot run the bloody thing because I have no way of choosing "acpi=off" before booting Ubuntu. I guess this is a pretty broad question but I'm going to ask it anyway: has anyone managed to find a version of Linux that works on an L630 without any errors? If not, is there a way to choose acpi=off before booting, maybe by adding a line to GRUB? Also, does anybody think I'm wasting my time? I read somewhere that Toshiba laptops don't work well with Linux, so if there isn't a fix for this I would appreciate being told. Cheers
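
     For the "set acpi=off permanently" part, the usual approach is to add it to the kernel command line in /etc/default/grub; a sketch, assuming a GRUB 2 based Ubuntu install:

         # One-off: at the GRUB menu press 'e', append acpi=off to the line starting with "linux", then boot
         sudo nano /etc/default/grub      # set: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off"
         sudo update-grub                 # regenerate /boot/grub/grub.cfg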

    Read the article

  • Can I remove the original file while running "sort"?

    - by Spaceman
     I'm sorting a huge file, around 400 gigabytes. I'm running out of disk space and I must do something quickly. Let's assume the original file is called original_file, so I execute (simplified): sort original_file | gzip -c > output_file. I use /home/tmp as the temporary directory. From what I see, there are a lot of intermediate files, like so: tmpA465, tmpB154, and so on. The smallest ones are about 12 megabytes; the largest are around 182 megabytes. So it seems that the sort command has already split the original file into small pieces, sorted them, and is now merging them into bigger parts (which will eventually be sorted as well). Please correct me if I'm wrong. Can I remove the original file right now without terminating the sort process? I've been waiting a few days for this, and it's important that the sort command does not fail and that I finally get the result file. The OS is Ubuntu Server 13.04, x64. Thanks!
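
     For reference, a sketch of how GNU sort's temporary-space options could be used in a situation like this (the flags are standard coreutils options; paths follow the question):

         # Keep temporaries on the roomy partition and compress them as they are written
         sort -T /home/tmp --compress-program=gzip original_file | gzip -c > output_file.gz
         # Watch how much temporary space sort is actually using
         watch -n 60 du -sh /home/tmp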

    Read the article

  • Installing Debian 7.6.0 on Lenovo Y50

    - by Girauder
     I am trying to install Debian on my new laptop, a 64-bit Lenovo Y50 running Windows 8. I first got together with a friend and installed Debian on his computer with no problems. However, I have tried to install Debian several times on mine, using the AMD64 KDE and netinst images, and accomplished nothing. First try: installed the KDE version. GRUB would let me choose which operating system I wanted, but when I selected Debian it would only load the command line. Second try: reinstalled, this time with the netinst version. I only got a black screen where I could type, but nothing else. Third try: tried the netinst again. This time, after making the partitions, I got a message saying that no EFI partition was found. I ignored the message, and this time it wouldn't even load GRUB, only a command-line interface with "grub rescue" or something. Not once did I get an error during the installation. What am I doing wrong? I assume the problem is that I need to make an EFI partition or something like that. So why didn't the first installations ask me for that? And if that is indeed the problem, how can I solve it? Update: the installation failed again, as predicted. Here you can find the Disk Management picture: http://postimg.org/image/433cpfkjz/ Please, somebody help me. I keep getting the grub rescue prompt. Secure Boot is disabled and legacy support is set first.
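
     A quick way to tell which mode the installer booted in, plus the rough shape of an EFI System Partition, as a sketch (the size and device name are examples only):

         # Inside the running installer/live shell: UEFI mode if this directory exists
         [ -d /sys/firmware/efi ] && echo "Booted in UEFI mode" || echo "Booted in legacy BIOS mode"
         # Typical ESP created in the partitioner (example values):
         #   /dev/sda1  512 MB  FAT32, flagged "EFI System Partition", mounted at /boot/efi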

    Read the article

  • Deploying Memcached as 32bit or 64bit?

    - by rlotun
    I'm curious about how people deploy memcached on 64 bit machines. Do you compile a 64bit (standard) memcached binary and run that, or do people compile it in 32bit mode and run N instances (where N = machine_RAM / 4GB)? Consider a recommended deployment of Redis (from the Redis FAQ): Redis uses a lot more memory when compiled for 64 bit target, especially if the dataset is composed of many small keys and values. Such a database will, for instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB for 64 bit! That's a big difference. You can run 32 bit Redis binaries in a 64 bit Linux and Mac OS X system without problems. For OS X just use make 32bit. For Linux instead, make sure you have libc6-dev-i386 installed, then use make 32bit if you are using the latest Git version. Instead for Redis <= 1.2.2 you have to edit the Makefile and replace "-arch i386" with "-m32". If your application is already able to perform application-level sharding, it is very advisable to run N instances of Redis 32bit against a big 64 bit Redis box (with more than 4GB of RAM) instead than a single 64 bit instance, as this is much more memory efficient. Would not the same recommendation also apply to a memcached cluster?
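
     For illustration, running several smaller instances on one 64-bit host just means separate ports and memory caps; a sketch with hypothetical sizes (the flags are standard memcached options):

         # Four 3 GB instances instead of one 12 GB instance
         for port in 11211 11212 11213 11214; do
             memcached -d -u memcache -m 3072 -p "$port"
         done
         # The application then shards keys across host:11211..11214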

    Read the article
