Search Results

Search found 26742 results on 1070 pages for 'linux kernel'.

Page 514/1070 | < Previous Page | 510 511 512 513 514 515 516 517 518 519 520 521  | Next Page >

  • Start vino-server (VNC) before login on Linux CentOS

    - by Dr. Gianluigi Zane Zanettini
    I'm using the default vino-server package to access my CentOS 6 workstation via VNC. It works fine, but only AFTER I log in locally on the workstation. I need vino-server to start BEFORE the login, right at the GNOME login screen where I choose username and password. Due to personal reasons, I need to use Vino and not vnc-server or any other package. I already tried to insert /usr/libexec/vino-server & in /etc/gdm/Init/Default, but this didn't solve the issue.
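    A minimal, untested sketch of the kind of launch line sometimes tried in /etc/gdm/Init/Default: vino-server started from there has no DISPLAY, X authority file or D-Bus session of its own, so all three are supplied explicitly. The auth-file path is an assumption and may differ on CentOS 6.

        # Hypothetical excerpt for /etc/gdm/Init/Default
        export DISPLAY=:0
        export XAUTHORITY=$(ls /var/run/gdm/auth-for-gdm*/database 2>/dev/null | head -n 1)
        /usr/bin/dbus-launch /usr/libexec/vino-server &

    Note that vino reads its settings (including the VNC password) from the GConf profile of the user that starts it, so the gdm user would need those keys configured as well.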

    Read the article

  • Accessing global variable in multithreaded Tomcat server

    - by jwegan
    I have a singleton object that I construct like this:

        private static volatile KeyMapper mapper = null;

        public static KeyMapper getMapper() {
            if (mapper == null) {
                synchronized (Utils.class) {
                    if (mapper == null) {
                        mapper = new LocalMemoryMapper();
                    }
                }
            }
            return mapper;
        }

    The class KeyMapper is basically a synchronized wrapper around a HashMap with only two functions, one to add a mapping and one to remove a mapping. When running in Tomcat 6.24 on my 32-bit Windows machine everything works fine. However, when running on a 64-bit Linux machine (CentOS 5.4 with OpenJDK 1.6.0-b09), I add one mapping and print out the size of the HashMap used by KeyMapper to verify the mapping got added (i.e. verify size = 1). Then I try to retrieve the mapping with another request and I keep getting null, and when I check the size of the HashMap it is 0. I'm confident the mapping isn't accidentally being removed, since I've commented out all calls to remove (and I don't use clear or any other mutators, just get and put). The requests go through Tomcat 6.24 (configured to use 200 threads with a minimum of 4 threads) and I passed -Xnoclassgc to the JVM to ensure the class isn't inadvertently getting garbage collected (the JVM also runs in -server mode). I also added a finalize method to KeyMapper that prints to stderr if it ever gets garbage collected, to verify that it wasn't being collected. I'm at my wit's end and I can't figure out why one minute the entry in the HashMap is there and the next it isn't :(
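    For comparison, a minimal sketch of the initialization-on-demand holder idiom, which gives lazy, thread-safe initialization without double-checked locking. This is a general alternative rather than a diagnosis: if the class is packaged inside more than one webapp, each Tomcat webapp classloader loads its own copy and every copy gets its own static instance, which would also produce the symptoms above. The class names follow the question.

        // Hypothetical sketch: lazy, thread-safe singleton via a holder class.
        public final class Utils {
            private Utils() {}

            private static class MapperHolder {
                // The JVM guarantees this runs exactly once, on first use of getMapper().
                static final KeyMapper INSTANCE = new LocalMemoryMapper();
            }

            public static KeyMapper getMapper() {
                return MapperHolder.INSTANCE;
            }
        }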

    Read the article

  • [Javascript] Linux Ajax (mootools Request.JSON) Header error

    - by VDVLeon
    Hi all, I use the following code to get some JSON data:

        var request = new Request.JSON({
            'url': sourceURI,
            'onSuccess': onPageData
        });
        request.get();

    Request.JSON is a class from MooTools (a JavaScript library). But on Linux (Ubuntu, on Firefox 3.5 and Chrome) the request always fails, so I tried to display the HTTP request the Ajax call is sending (I used netcat to capture it). The request looks like this:

        OPTIONS /the+url HTTP/1.1
        Host: example.com
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/532.3 (KHTML, like Gecko) Chrome/4.0.226.0 Safari/532.3
        Referer: http://example.com/ref...
        Access-Control-Request-Method: GET
        Origin: http://example.com
        Access-Control-Request-Headers: X-Request, X-Requested-With, Accept
        Accept: */*
        Accept-Encoding: gzip,deflate
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The first line of the HTTP request is not what it should be: it reads OPTIONS /the+url HTTP/1.1 when it should be GET /the+url HTTP/1.1. Does anybody know why this happens and how to fix it?
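    The Access-Control-Request-* headers in the capture mark this as a CORS preflight: because the request is treated as cross-origin and carries non-simple headers (X-Request, X-Requested-With), the browser sends OPTIONS first and only issues the real GET if the server answers the preflight. A hedged way to reproduce that preflight by hand, assuming curl is available (the URL is the placeholder from the question):

        curl -i -X OPTIONS 'http://example.com/the+url' \
             -H 'Origin: http://example.com' \
             -H 'Access-Control-Request-Method: GET' \
             -H 'Access-Control-Request-Headers: X-Request, X-Requested-With'

    If the response lacks Access-Control-Allow-Origin / Access-Control-Allow-Headers, the browser never sends the GET at all.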

    Read the article

  • Read a specific range of lines from a file using c

    - by James Joy
    I have the following content in a file:

        hhasfghgsafjgfhgfhjf
        gashghfdgdfhgfhjasgfgfhsgfjdg
        jfshafghgfgfhfsghfgffsjgfj
        ...
        startread
        hajshjsfhajfhjkashfjf
        hasjgfhgHGASFHGSHF
        hsafghfsaghgf
        ...
        stopread
        ...
        jsfjhfhjgfsjhfgjhsajhdsa
        jhasjhsdabjhsagshaasgjasdhjk
        jkdsdsahghjdashjsfahjfsd

    I need to read the lines from the line after startread up to the line before stopread using C code and store them in a string variable (of course with a \n at every line break). How can I achieve this? I have used fgets(line, sizeof(line), file); but it starts reading from the beginning. I don't have the exact line numbers to start and stop reading, since the file is written by another C program, but the identifiers startread and stopread mark where to start and stop reading. The operating platform is Linux. Thanks in advance.
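    A minimal C sketch of one way to do it, assuming each marker sits alone on its own line; the input file name is a placeholder. The buffer is grown with realloc as lines arrive, so the section between the markers can be arbitrarily long.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            FILE *file = fopen("input.txt", "r");   /* file name is a placeholder */
            if (!file) {
                perror("fopen");
                return 1;
            }

            char line[1024];
            char *result = NULL;
            size_t len = 0;
            int inside = 0;

            while (fgets(line, sizeof line, file)) {
                if (strncmp(line, "startread", 9) == 0) { inside = 1; continue; }
                if (strncmp(line, "stopread", 8) == 0)  break;
                if (inside) {
                    size_t add = strlen(line);
                    char *tmp = realloc(result, len + add + 1);
                    if (!tmp) { free(result); fclose(file); return 1; }
                    result = tmp;
                    memcpy(result + len, line, add + 1);   /* copies the trailing '\0' too */
                    len += add;
                }
            }
            fclose(file);

            if (result)
                printf("%s", result);   /* the lines between the markers, newlines kept */
            free(result);
            return 0;
        }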

    Read the article

  • rsync osx to linux

    - by Nick
    I did a backup to a remote NFS folder with rsync, from a Mac to a remote Debian box. The final backup is 58GB smaller than the original, yet rsync says that everything was OK and there is nothing left to update.

        Macintosh:/Volumes/Data1 root# du -sh Produccion/
        319G    Produccion/

        root@Disketera:/mnt/soho_storage/samba/shares# du -sh Produccion/
        260G    Produccion/

    Can I trust rsync? I'm using rsync -av --stats /Volumes/Data1/Produccion/ /mnt/red/ (/mnt/red is my Samba mount point). Some folders differ:

        root@Disketera:/mnt/soho_storage/samba/shares/Produccion/tiposok# du -sh *
        0       IndoSanBol
        0       IndoSans-Bold
        0       IndoSans-Italic
        0       IndoSans-Light
        0       IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        12K     TCL IndoSans_mac

        Macintosh:/Volumes/Data1/Produccion/tiposok root# du -sh *
        36K     IndoSanBol
        40K     IndoSans-Bold
        36K     IndoSans-Italic
        36K     IndoSans-Light
        36K     IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        160K    TCL IndoSans_mac
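    The zero-byte entries on the Debian side look like classic Mac font files whose data lives entirely in the HFS+ resource fork, which a stock rsync does not transfer; different du block sizes on the two systems also skew the totals. A hedged way to see what actually differs in content, without copying anything, assuming the same invocation still works:

        # Dry run with checksums: lists every file whose content differs, copies nothing.
        rsync -avnc --itemize-changes --stats /Volumes/Data1/Produccion/ /mnt/red/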

    Read the article

  • Unable to ssh to a Linux VM after a day

    - by jogabonito
    I have a machine running 4 VMs on it. There is one Fedora VM which is causing me some trouble. The IPs of the VMs are something like 10.100.100.*. I have a Windows PC on the same network with the IP 10.100.25.77. When I reboot the Fedora VM, I am able to ping it from my Windows PC as well as use PuTTY to ssh to it. The next day, I can't ping it or ssh to it from my Windows PC, yet I can ping and ssh to the other VMs on the machine, and if I ssh to one of the other VMs, I can ping and ssh to the Fedora VM from there. If I then restart it, things go back to normal and I can access it without any issues. The IP of the VM is statically assigned and doesn't change after rebooting. I would like to know what is causing this and how to get it fixed. As a last resort, I am thinking of running a cron job to restart the VM every night; it is not a critical server, but it is used occasionally in the daytime.
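    A hedged diagnostic sketch for the next time it happens, run on the Fedora VM by hopping through one of the working VMs (the interface name eth0 is an assumption):

        # Watch whether the pings from the Windows PC arrive at all while the problem occurs:
        tcpdump -n -i eth0 icmp and host 10.100.25.77

        # If nothing arrives, the fault is outside the guest (ARP, bridge, routing on the host);
        # if requests arrive but no replies leave, look at the VM's own firewall and routes:
        iptables -L -n -v
        ip route show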

    Read the article

  • Linux Mint Debian Cinnamon soundcard selection

    - by Peter Gridling
    I use said OS and I am missing a window or setting to set the default sound card. I have a built-in analog sound card which works just fine and is set as default. I also have a USB headset which works fine, e.g. with Mumble I set input and output to my headset and it just works; the mute button and volume control on the headset also work, but they act on the default sound card. What I am missing is a way to set my headset as the default sound card. I also tried some stuff from http://wiki.debian.org/ALSA/ but it seems that ALSA is not supported 100% :( Is anyone here able to help? Best regards
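    A hedged ALSA-level sketch: point the default PCM and control devices at the headset in ~/.asoundrc. The card name "Headset" is a placeholder and has to match whatever aplay -l or /proc/asound/cards reports; if the desktop runs PulseAudio, its own default sink (pavucontrol, or pactl set-default-sink) takes precedence instead.

        # ~/.asoundrc -- hypothetical; replace "Headset" with the real card name or index
        defaults.pcm.card Headset
        defaults.ctl.card Headset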

    Read the article

  • Consolidate multiple site files into single location

    - by seengee
    We have a custom PHP/MySQL CMS running on Linux/Apache that's rolled out to multiple sites (20+) on the same server. Each site uses exactly the same CMS files, with a few files customised per site. The customised files for each site are:

        /library/mysql_connect.php
        /public_html/css/*
        /public_html/ftparea/*
        /public_html/images/*

    There are also a couple of other random files inside /public_html/includes/ that are unique to each site. Other than this, each site on the server uses the exact same files, and each site sits within /home/username/. There is obviously a massive amount of replication here, as each time we want to deploy a system update we need to update each user account. Given that the common site files are all stored in SVN, it would make far more sense if we could simply commit to SVN and deploy to a single location directly from there. Unfortunately, making a major architecture change at this stage could be problematic. In my mind the ideal scenario would mean creating an account like /home/commonfiles/ and having each site use these common files unless an account-specific file exists; for example, a request is made to /home/user/public_html/index.php, but as this file doesn't exist the request is then redirected to /home/commonfiles/public_html/index.php. I know that this approach is generally possible, similar to how Zend Framework (and probably others) redirects all requests that don't match a specific file to index.php. I'm just not sure exactly how to go about implementing it, or whether it's actually advisable. I would really welcome any input/ideas people have. Thanks.
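    A hedged middle ground that avoids the architecture change: deploy one SVN working copy to every account with rsync, excluding the per-site files listed above so they are never touched. The account glob and the /srv/cms-trunk checkout path are placeholders.

        for u in /home/site*/; do                     # placeholder for the site accounts
            rsync -a --delete \
                --exclude 'library/mysql_connect.php' \
                --exclude 'public_html/css/' \
                --exclude 'public_html/ftparea/' \
                --exclude 'public_html/images/' \
                /srv/cms-trunk/ "$u"                  # single SVN checkout, path is an assumption
        done
        # The site-specific files in public_html/includes/ would need their own --exclude lines.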

    Read the article

  • linux shell, keep task alive for specified time

    - by liori
    Hello, I am looking for a tool to keep a task alive (restarting it if necessary) for a specified amount of time (in seconds), and then kill it and stop. For example, keep_for 3600 rsync foo bar:faz should try to synchronize a directory (restarting rsync in case of e.g. a dropped connection), but explicitly kill rsync and stop respawning it after an hour. I tried to write a shell script for that, but it was surprisingly difficult to write, with lots of edge cases (like the child process not getting killed, or shell escaping issues). So... maybe there already is a tool for that?
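    For reference, a minimal sketch of such a keep_for helper built on GNU coreutils' timeout, which handles killing the child itself; quoting survives because "$@" is passed straight through. It is only a sketch and keeps none of the edge-case handling a real tool would have.

        keep_for() {
            local limit=$1; shift
            local end=$(( $(date +%s) + limit ))
            while :; do
                local remaining=$(( end - $(date +%s) ))
                [ "$remaining" -le 0 ] && break
                # Run the command for at most the remaining time; SIGKILL 10s after TERM.
                timeout --kill-after=10 "$remaining" "$@"
                sleep 1   # small pause before respawning after a crash or disconnect
            done
        }

        keep_for 3600 rsync foo bar:faz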

    Read the article

  • remote linux support service

    - by James
    Hi, after struggling with a wireless adapter installation for a few days, I am wondering if there is some service that will do it remotely. I am not talking about an enterprise-type remote support center; I am thinking about a similar service for home PCs. Is anyone aware of such a service?

    Read the article

  • linux: selective sudo access for a particular command

    - by bguiz
    Hi, is it possible to grant a particular user sudo access for one particular command only? Thanks -- More info: We farm out lengthy optimisation runs to each other's boxes over ssh. These runs take hours, sometimes days. The shutdown command can only be run with sudo. Being conscious of my environmental footprint, I would like to give the initiator(s) of these runs sudo access to the shutdown command on my box, without sudo access to anything else, so that they can shut down my machine when they no longer need it. I am aware that I can schedule a shutdown before I leave my box, but I am looking for a better solution.
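    A hedged sketch of the sudoers line this usually comes down to (edit with visudo; the user name is a placeholder and the path to shutdown may differ by distribution):

        # Allow one user to run only shutdown as root, nothing else, without a password.
        colleague ALL = (root) NOPASSWD: /sbin/shutdown

    That user could then run sudo /sbin/shutdown -h +5, while any other sudo command is still refused.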

    Read the article

  • Available filesystems for Linux that are case-insensitive?

    - by David
    I have a client whose web application was written entirely in a Windows environment and served from Windows. Unfortunately there are way too many cases of a request for "file/At/Somelocation.php" where the file is actually something horrible like "File/at/SomeLocation.PHP". I really don't want to be forced to work in Windows, but it will take weeks, if not longer, to fix all the casing issues. Am I SOL here?
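    One hedged workaround at the web-server layer rather than the filesystem layer, assuming the app is served by Apache with mod_speling available: let the server correct requests that differ from the real file name only by case (CheckCaseOnly is the newer of the two directives and may not exist on older builds). This only helps with URLs Apache resolves itself; include()/require() calls inside the PHP code remain case-sensitive.

        # Hypothetical vhost / .htaccess snippet
        <IfModule mod_speling.c>
            CheckSpelling On
            CheckCaseOnly On
        </IfModule>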

    Read the article

  • Make Errors: Missing Includes in C++ Script?

    - by Abs
    Hello all, I just got help on how to compile this script a few minutes ago on SO, but I have managed to get errors. I am only a beginner in C++ and have no idea what the errors below mean or how to fix them. This is the script in question. I have read comments from some users suggesting they changed the #include parts, but it seems to be exactly what the script has, see this comment.

        [root@localhost wkthumb]# qmake-qt4 && make
        g++ -c -pipe -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -Wall -W -D_REENTRANT -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -I/usr/lib/qt4/mkspecs/linux-g++ -I. -I/usr/include/QtCore -I/usr/include/QtGui -I/usr/include -I. -I. -I. -o main.o main.cpp
        main.cpp:5:20: error: QWebView: No such file or directory
        main.cpp:6:21: error: QWebFrame: No such file or directory
        main.cpp:8: error: expected constructor, destructor, or type conversion before ‘*’ token
        main.cpp:11: error: ‘QWebView’ has not been declared
        main.cpp: In function ‘void loadFinished(bool)’:
        main.cpp:18: error: ‘view’ was not declared in this scope
        main.cpp:18: error: ‘QWebSettings’ has not been declared
        main.cpp:19: error: ‘QWebSettings’ has not been declared
        main.cpp:20: error: ‘QWebSettings’ has not been declared
        main.cpp: In function ‘int main(int, char**)’:
        main.cpp:42: error: ‘view’ was not declared in this scope
        main.cpp:42: error: expected type-specifier before ‘QWebView’
        main.cpp:42: error: expected `;' before ‘QWebView’
        make: *** [main.o] Error 1

    I have the WebKit parts on my Fedora Core 10 machine: qt-4.5.3-9.fc10.i386, qt-devel-4.5.3-9.fc10.i386. Thanks all for any help.
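    All of the missing names (QWebView, QWebFrame, QWebSettings) come from the QtWebKit module, so this looks like a project-file or packaging issue rather than a code one. A hedged checklist (the Fedora package arrangement is a guess; on some releases the QtWebKit headers ship inside qt-devel itself):

        # Are the QtWebKit headers present at all?
        ls /usr/include/QtWebKit/QWebView /usr/include/QtWebKit/QWebFrame

        # The .pro file has to enable the module, i.e. contain a line:  QT += webkit
        grep -n '^QT' *.pro

        # Re-run qmake after editing the .pro so the include/link flags are regenerated.
        qmake-qt4 && make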

    Read the article

  • Linux: CIFS/Samba mount hangs for several minutes

    - by Pistos
    I have a small local network which has a Gentoo box and a Windows box. I mount a share originating on the Windows box onto the Gentoo box with a command like:

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos //192.168.0.103/Users /mnt/windowsbox

    Most of the time, everything Just Works, and I can read and write without problems. However, every few weeks or so, the connection or the mount point seems to go dead or hang, such that any process that tries to access the mount point gets stuck in D state (disk, or I/O wait). These processes become impervious to TERM and KILL signals. Disconnecting and reconnecting the Windows box from the network does not help. The frozen state lasts for 5+ minutes. It's really frustrating and gets in the way of normal work, because it freezes Save As dialogues, ls commands, etc. If I issue a umount on the mount point, it either hangs also, or reports that the mount point is in use. Eventually, the dead state resolves itself, and the mount point gets unmounted, or it becomes possible to umount with no delay. My guess is that this happens when the connection/mount has gone idle, or when the Windows machine has been idle. I am not really sure. Why is this happening, and what can I do to prevent it? Or how can I successfully kill these D-state processes at will? Possibly related: CIFS mounts hang on read
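    Two hedged mitigations at the mount level, neither a guaranteed fix: the soft option lets stalled operations return an error instead of pinning processes in uninterruptible sleep, and a lazy unmount detaches a dead mount point without waiting for it.

        # Variant of the same mount with 'soft' added (failed I/O instead of a hang):
        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos,soft \
            //192.168.0.103/Users /mnt/windowsbox

        # Detach a wedged mount point without waiting for outstanding I/O:
        umount -l /mnt/windowsbox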

    Read the article

  • create vmware virtual machine via command line on linux system

    - by tom smith
    I am evaluating/investigating VMware and how you create a "virtual machine" from the command line on RHEL/CentOS. Basically, I want to be able to create a test virtual machine and then run that VM on another system using VMware Player. So I'm looking for pointers/articles/instructions that detail what I need (in terms of tools/apps) and the steps needed to accomplish this. I've seen a few articles/sites that discuss creating virtual machines, but they all involve using the GUI. Thanks.

    Update: While VMware is the company, there are different tools/apps provided to create a virtual machine. Basically, I want to do a test, to ultimately have a virtual machine/image that can be run on a separate server using the vmplayer app. I've seen docs that discuss using the GUI to create the VM, but haven't found any (yet) that discuss how to accomplish this using a command-line approach. Thanks...
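    A heavily hedged sketch of one command-line route: create the virtual disk with vmware-vdiskmanager (shipped with Workstation/Player) and write a minimal .vmx by hand for vmplayer to open. The .vmx keys below are from memory and may need adjusting; the sizes, guestOS value and ISO path are placeholders.

        # Create a 20GB growable SCSI disk.
        vmware-vdiskmanager -c -s 20GB -a lsilogic -t 0 centos-test.vmdk

        cat > centos-test.vmx <<'EOF'
        config.version = "8"
        virtualHW.version = "7"
        displayName = "centos-test"
        guestOS = "rhel5"
        memsize = "1024"
        scsi0.present = "TRUE"
        scsi0:0.present = "TRUE"
        scsi0:0.fileName = "centos-test.vmdk"
        ide1:0.present = "TRUE"
        ide1:0.deviceType = "cdrom-image"
        ide1:0.fileName = "CentOS-install.iso"
        ethernet0.present = "TRUE"
        EOF

        vmplayer centos-test.vmx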

    Read the article

  • Random-access archive for Unix use

    - by tylerl
    I'm looking for a good format for archiving entire file systems of old Linux computers.

    TAR.GZ: The tar.gz format is great for archiving files with UNIX-style attributes, but since the compression is applied across the entire archive, the design precludes random access. If you want to access a file at the end of the archive, you have to start at the beginning and decompress the whole file (which could be several hundred GB) up to the point where you find the entry you're looking for.

    ZIP: Conversely, one selling point of the ZIP format is that it stores an index of the archive: filenames are stored separately, with pointers to the location within the archive where the data can be found. If I want to extract a file at the end, I look up the position of that file by name, seek to that location, and extract the data. However, it doesn't store file attributes such as ownership, permissions, symbolic links, etc.

    Other options? I've tried using squashfs, but it's not really designed for this purpose. The file format is not consistent between versions, and building the archive takes a lot of time and space. What other options might suit this purpose better?
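    One hedged compromise along the ZIP line: keep the data in a zip, whose central directory provides the random access, and carry the UNIX metadata that zip drops in a small sidecar listing stored inside the same archive. Restoring would need a short script that replays the sidecar with chmod/chown; the paths below are placeholders.

        cd /mnt/oldbox                                  # root of the filesystem to archive
        zip -ry /tmp/oldbox.zip .                       # -y stores symlinks as symlinks
        # Record mode, owner, group and file type for every path in a sidecar listing.
        find . -printf '%m %u %g %y %p\n' > /tmp/oldbox.meta
        zip -j /tmp/oldbox.zip /tmp/oldbox.meta         # -j drops the /tmp path prefix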

    Read the article

  • Cannot redirect ip traffic with iptables to new ip on linux centOS

    - by Kiwi
    Today I was able to migrate some of the game servers to another server, and I need some help redirecting the traffic from the old IP to the new one:

        SERVER1 1.1.1.1 ----- (internet) ----- SERVER2 2.2.2.2

    I assume iptables is the way to do this, so I used these rules on my CentOS box on SERVER1:

        /etc/sysctl.conf: net.ipv4.ip_forward = 1

        iptables -t nat -A PREROUTING -p udp --dest 1.1.1.1 --dport 27015 -j DNAT --to-destination 2.2.2.2:27015
        iptables -t nat -A POSTROUTING -j MASQUERADE
        iptables -t nat -A POSTROUTING -d 2.2.2.2 -p udp --dport 27015 -j SNAT --to 1.1.1.1

    But clients cannot connect to the server through the old IP; the redirection never starts.
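    A hedged version of the relay rules for comparison, assuming the game really does use UDP 27015 and that forwarding has actually been applied (editing sysctl.conf alone does nothing until sysctl -p or a reboot). The FORWARD-chain rules are the piece most often missing:

        sysctl -w net.ipv4.ip_forward=1

        iptables -t nat -A PREROUTING  -d 1.1.1.1 -p udp --dport 27015 \
                 -j DNAT --to-destination 2.2.2.2:27015
        # Rewrite the source so replies from 2.2.2.2 come back through this box.
        iptables -t nat -A POSTROUTING -d 2.2.2.2 -p udp --dport 27015 \
                 -j SNAT --to-source 1.1.1.1
        # The relayed packets must also be allowed through the FORWARD chain.
        iptables -A FORWARD -d 2.2.2.2 -p udp --dport 27015 -j ACCEPT
        iptables -A FORWARD -s 2.2.2.2 -p udp --sport 27015 -j ACCEPT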

    Read the article

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello overflow community! Does anyone know if there is a "best practice" for concatenating MP3 files to create new files, while preserving the original files? I am working on a CentOS Linux machine, on the command line, and I will eventually call the command line from a PHP script. I have been doing research and I have come up with a process that I think could work. It combines general advice from different forums, blogs, and sources like this one. So here I go:

    1. Create a temporary folder.
    2. Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know; I didn't know "raw" files existed until recently, so I could use some suggestions on this).
    3. Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file in the final "processed" directory.
    4. Delete the contents of the temporary folder, with the temporary raw files inside.
    5. Convert the final file from "raw" to MP3 and enjoy the finished result.

    I'm thinking that this course of action might be best because I can't necessarily control the quality of the original "source" MP3s. The only other option I could think of would be to create a script that performs a similar process when files are added to the system, leaving only the files with the "proper" format and removing the original "erroneous" file. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path that I could take? By concatenate, I mean to join together in sequence to create a new audio file from the concatenated files.
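    A hedged sketch of essentially that pipeline with sox and lame, assuming sox on this CentOS box was built with MP3 support (if not, each file can be decoded with lame --decode instead). WAV plays the role of the intermediate "raw" format and the originals are never modified; the file names and the 44.1 kHz / 192 kbps choices are placeholders.

        tmp=$(mktemp -d)
        i=0
        for f in intro.mp3 body.mp3 outro.mp3; do       # placeholder source files
            i=$((i+1))
            sox "$f" -r 44100 -c 2 "$tmp/part$i.wav"    # decode + resample to a common rate
        done
        sox "$tmp"/part*.wav "$tmp/joined.wav"          # concatenate the decoded parts
        lame -b 192 "$tmp/joined.wav" merged.mp3        # encode the final MP3
        rm -rf "$tmp"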

    Read the article

  • linux macvlan - stop from broadcasting hostname

    - by staticfloat
    I am trying to simulate two different computers on one box using the macvlan module (which is awesome, by the way), but I have one small problem: when I create the macvlan, Ubuntu 11.10 very helpfully starts broadcasting its hostname on both interfaces, creating an amazing amount of confusion for everything that deals with hostnames. Does anyone know how to stop Ubuntu from advertising its hostname on a certain interface? Thanks!
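    A hedged guess at where those broadcasts come from: on a desktop Ubuntu install the usual culprit is avahi-daemon's mDNS announcements, which can be pinned to specific interfaces in /etc/avahi/avahi-daemon.conf (interface names below are placeholders). If the announcements are NetBIOS ones from Samba instead, the "interfaces" and "bind interfaces only" options in smb.conf serve the same purpose.

        # /etc/avahi/avahi-daemon.conf -- restrict announcements to the real interface:
        #
        #   [server]
        #   allow-interfaces=eth0
        #   deny-interfaces=macvlan0
        #
        service avahi-daemon restart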

    Read the article

  • User permissions linux. (proftpd / nginx)

    - by user55745
    I've been having a complete nightmare trying to configure proftpd. I've got the proftpd server working with an SQL database. However, I want any uploaded files to be viewable by the web server running on the same box. The folders get created in /var/tmp/ as:

        drwx------ 2 ftpuser ftpgroup 4096 Oct 8 20:35 50730c4346512
        drwx------ 2 ftpuser ftpgroup 4096 Oct 8 20:38 50730f3a811ca

    I've tried adding www-data to the group with the following:

        usermod -g www-data ftpuser

    But this doesn't allow the web server access. In proftpd.conf I have the following umask set:

        Umask 0022

    It doesn't seem to make a difference what I set that value to.

    /etc/group (I'm sure I've messed up one of these two, but I'm getting desperate):

        ftpgroup:x:2001:www-data
        www-data:x:33:ftpgroup

    /etc/passwd:

        www-data:x:33:33:www-data:/var/www:/bin/sh
        proftpd:x:108:65534::/var/run/proftpd:/bin/false
        ftp:x:109:65534::/srv/ftp:/bin/false
        ftpuser:x:2001:33:proftpd user www-data:/bin/null:/bin/false

    The ftpuser table in the database has uid/gid set to 2001 for both. I'm going absolutely crazy trying to solve this; any help would be greatly appreciated.

    P.S. If I connect to the FTP server manually I can upload files via FileZilla, but this isn't working for the web camera, even though there is traffic going back and forth between the server and the camera.
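    A hedged sketch of the usual way round this, rather than changing ftpuser's primary group: add www-data to ftpgroup as a supplementary group, loosen the directories proftpd has already created, and have proftpd create future directories group-readable (its Umask directive accepts a second value that applies to directories). The directory paths are the ones from the listing above.

        usermod -a -G ftpgroup www-data        # add membership without replacing existing groups
        chmod -R g+rX /var/tmp/50730c4346512 /var/tmp/50730f3a811ca

        # In proftpd.conf: file umask 022, directory umask 022 (directories come out 755)
        #   Umask 022 022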

    Read the article

  • The real differences between Red Hat Enterprise Linux (RHEL) and CentOS

    - by w00binda
    I know that "the vast majority of changes made [by the CentOS team] will be made to comply with the upstream vendor's re-distribution policies concerning trademarked names or logos", but I'm looking for a way to measure the real differences between RHEL and CentOS. My goal is to demonstrate "scientifically" that it is possible to use CentOS for development and testing purposes and RHEL in the production environment. Any suggestions? Thanks in advance.
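    One hedged, measurable starting point: install the same point release of both, dump the package sets, and diff them. Comparing name and version (but not the release tag, which carries vendor branding) keeps the comparison fair; a deeper pass would diff the source packages themselves, since CentOS is rebuilt from the published RHEL SRPMs.

        # Run on each box, then diff the two listings (output file names are placeholders).
        rpm -qa --qf '%{NAME}-%{VERSION}\n' | sort > packages-$(hostname).txt
        diff -u packages-rhel.txt packages-centos.txt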

    Read the article

  • windows printer spool gets stuck with file at 64kb from linux and mac

    - by Juan Diego
    Hi, I have two printers, one on a file server with Windows 2003 and the other on Windows XP. The thing is that when I try to print from my machine, my file stays in the queue forever; it says 64KB out of whatever the size of the file I send is. I have seen similar problems with some machines that run Mac OS X. The Windows machines apparently have no problems printing. They are not connected through Active Directory, just the network. In the past I have seen people install a non-Microsoft print server on Windows, but I don't remember the name of any of the programs. I have been googling a lot and have not found anything to replace the Microsoft print spooler service; maybe I am mistaken. Every day I have to restart the print spooler service (I even created a .bat file for it). I am out of ideas here.
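    A hedged check from the Linux side that bypasses the local queue entirely: push a job straight to the Windows share with smbclient and see whether it also stalls at 64KB, which helps distinguish a client-side driver/queue problem from a server-side spooler one. The server name, share and credentials below are placeholders.

        smbclient //printserver/printshare -U username -c 'print /etc/hosts'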

    Read the article

  • DVI output only working on Windows, not during booting or on Linux

    - by Mononofu
    So yesterday I booted my laptop and the external monitor I have it connected to just stayed black. At first, I thought the problem would go away when Ubuntu was loaded, but it didn't. I tried to reboot a few times, to no avail. Then I decided to give Windows 7 a try, and suddenly (at the login screen) my external monitor turned on and worked like normal. I have the monitor connected via DVI, and this only seems to work with Windows now. I don't even get a signal in my BIOS! Mind you, everything was working fine before that, and I didn't change a single thing. I then tried to connect the monitor via VGA (from my DVI jack, which can output VGA using an adaptor), and it worked again. However, 1920x1200 over VGA looks like crap: black print on a white background is basically illegible. Do you have any ideas how to fix this peculiar problem? I only use Windows for gaming, so it's no real help that it still works normally. Please also excuse any spelling mistakes, I am practically typing this blindly.

    Edit: I only have one graphics card in my laptop, and I can't select anything related to it in my BIOS. In fact, I can do almost nothing there. My laptop is a Nexoc Osiris E703; the graphics card is a GeForce Go 7900 GTX. As I mentioned before, DVI output during booting and on Ubuntu was working fine for years before yesterday!

    Read the article
