Search Results

Search found 239 results on 10 pages for 'babyg wc'.


  • Is there a limit for the number of files in a directory on an SD card?

    - by jamesh
    I have a project written for Android devices. It generates a large number of files each day, all text files and images, and the app uses a database to reference them. The app is supposed to clear these files up after a little use (perhaps after a few days), but that process may or may not be working; it is not the subject of this question. Due to a historic accident, the organization of the files is somewhat naive: everything lives in the same directory, a .hidden directory containing a zero-byte .nomedia file to stop the MediaScanner from indexing it. Today I am seeing this error reported:

        java.io.IOException: Cannot create: /sdcard/.hidden/file-4200.html
            at java.io.File.createNewFile(File.java:1263)

    The sdcard has plenty of storage left, but counting the files gives:

        $ cd /Volumes/NO_NAME/.hidden
        $ ls | wc -w
        9058

    Deleting a number of files seems to have allowed today's file creation to proceed. Regrettably, I did not try touching a new file to reproduce the error on a command line, and I deleted several hundred files rather than a handful. My question is: are there hard limits on file size or on the number of files in a directory, and am I even on the right track here? Nota bene: the SD card is as-is, i.e. I haven't formatted it, so I would guess it is some FAT-* format. FAT32 has a hard file-size limit of 4GB (well above the file sizes I am dealing with) and a limit on the number of files in the root directory, but I am definitely not writing files in the root directory.
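
    A quick way to test the directory-limit theory from a shell (a sketch; the mount point is taken from the session above): on FAT filesystems a file with a long name consumes several directory slots - one 8.3 entry plus one VFAT entry per 13 characters of name - so a directory can refuse new files well before the theoretical ceiling of roughly 65,000 entries. Counting the entries and trying to create one more directly would show whether the filesystem, rather than the app, is refusing:

        # count the entries the directory currently holds
        ls -A /Volumes/NO_NAME/.hidden | wc -l
        # try to create one more entry by hand (probe-file.html is just a placeholder name);
        # an error here points at the filesystem, not the app
        touch /Volumes/NO_NAME/.hidden/probe-file.html && echo "created OK"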

    Read the article

  • Should I thread this?

    - by Psytronic
    I've got a "Loading Progress" WPF form which I instantiate when I start loading some results into my main window, and every time a result is loaded I want the progress bar to fill up by x amount (based on the number of results I'm loading). However, the progress bar in the window stays blank the entire time until the results have finished loading, and then it just displays the full progress bar. Does this need threading to work properly, or is it just to do with the way I'm trying to get it to work?

        //code snippet
        LoadingProgress lp = new LoadingProgress(feedCount);
        lp.Show();
        foreach (FeedConfigGroup feed in _Feeds)
        {
            feed.insertFeeds(lp);
        }

        //part of insertFeeds(LoadingProgress lpBox)
        foreach (Feeds fd in _FeedSource)
        {
            lpBox.setText(fd.getName);
            XmlDocument feedResults = new XmlDocument();
            feedResults.PreserveWhitespace = false;
            try
            {
                feedResults.Load(wc.OpenRead(fd.getURL));
            }
            catch (WebException)
            {
                lpBox.addError(fd.getName);
            }
            foreach (XmlNode item in feedResults.SelectNodes("/rss/channel/item"))
            {
                //code for processing the nodes...
            }
            lpBox.progressIncrease();
        }

    If more code is needed let me know.

    Read the article

  • Perl 500 internal server error. Something wrong with my code.

    - by Nitish
    I get a 500 internal server error when I try to run the code below on a web server which supports Perl:

        #!/usr/bin/perl
        use LWP;
        my $ua = LWP::UserAgent->new;
        $ua->agent("TestApp/0.1 ");
        $ua->env_proxy();
        my $req = HTTP::Request->new(POST => 'http://www.google.com/loc/json');
        $req->content_type('application/jsonrequest');
        $req->content('{ "cell_towers": [{"location_area_code": "55000", "mobile_network_code": "95", "cell_id": "20491", "mobile_country_code": "404"}], "version": "1.1.0", "request_address": "true"}');
        my $res = $ua->request($req);
        if ($res->is_success) {
            print $res->content, "\n";
        } else {
            print $res->status_line, "\n";
            return undef;
        }

    But there is no error when I run the code below:

        #!/usr/bin/perl
        use CGI::Carp qw(fatalsToBrowser);
        print "Content-type: text/html\n\n";
        print "<HTML>\n";
        print "<HEAD><TITLE>Hello World!</TITLE></HEAD>\n";
        print "<BODY>\n";
        print "<H2>Hello World!</H2> <br /> \n";
        foreach $key (sort keys(%ENV)) {
            print "$key = $ENV{$key}<p>";
        }
        print "</BODY>\n";
        print "</HTML>\n";

    So I think there is some problem with my code. When I run the first Perl script on my local machine with the -wc switch, it says the syntax is OK. Help me please.
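
    One difference between the two scripts is worth checking before anything else: under CGI, everything a script prints is the HTTP response, and the response must begin with a header such as Content-type followed by a blank line. The second script prints that header; the first prints the raw response body (or status line) with no header at all, which most servers report as a 500. A minimal check from a shell, assuming the script is saved as locate.pl (a placeholder name):

        # run the script the way the web server would and inspect the first lines;
        # if the output does not start with "Content-type: ..." plus a blank line,
        # the server will return a 500 even though perl -wc says the syntax is OK
        perl locate.pl | head -3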

    Read the article

  • Capturing output of find . -print0 into a bash array

    - by Idris
    Using find . -print0 seems to be the only safe way of obtaining a list of files in bash, due to the possibility of filenames containing spaces, newlines, quotation marks, etc. However, I'm having a hard time actually making find's output useful within bash or with other command line utilities. The only way I have managed to make use of the output is by piping it to perl and changing perl's IFS to null:

        find . -print0 | perl -e '$/="\0"; @files=<>; print $#files;'

    This example prints the number of files found, avoiding the danger of newlines in filenames corrupting the count, as would occur with:

        find . | wc -l

    As most command line programs do not support null-delimited input, I figure the best thing would be to capture the output of find . -print0 in a bash array, like I have done in the perl snippet above, and then continue with the task, whatever it may be. How can I do this? This doesn't work:

        find . -print0 | ( IFS=$'\0' ; array=( $( cat ) ) ; echo ${#array[@]} )

    A much more general question might be: how can I do useful things with lists of files in bash?
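
    A sketch of the standard idiom for exactly this: read the null-delimited stream with the shell's own read -r -d '' and append to an array. Feeding the loop through process substitution keeps the array in the current shell (piping into the loop would build it in a subshell and lose it):

        files=()
        while IFS= read -r -d '' f; do
            files+=("$f")
        done < <(find . -print0)
        echo "${#files[@]}"    # count is safe against spaces, quotes and newlines in names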

    Read the article

  • BASH: How to count all the human readable files?

    - by user1687406
    I'm taking an intro course on UNIX and have a homework question that follows: "How many files in the previous question are text files? A text file is any file containing human-readable content." (TRICK QUESTION. Run the file command on a file to see whether it is a text file or a binary data file! If you simply count the number of files with the ".txt" extension you will get no points for this question.) The previous question simply asked how many regular files there were, which was easy to figure out with:

        find . -type f | wc -l

    I'm just having trouble determining what "human-readable content" is, since I'm assuming it means anything besides binary/assembly, but I thought that's what -type f displays. Maybe that's what the professor meant by "trick question"? This question has a follow-up later that also asks "What text files contain the string 'csc' in any mix of upper and lower case?". Obviously "text" refers to more than just .txt files, but I need to figure out the first question to determine this!
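
    Following the professor's own hint, a sketch that leans on file(1): its description contains the word "text" for anything it considers human-readable (ASCII text, UTF-8 Unicode text, shell script text, and so on), so counting those descriptions answers the question:

        # classify every regular file; -b drops the filename so a name containing
        # "text" cannot cause a false match, leaving one description per line
        find . -type f -exec file -b {} + | grep -c text

    The follow-up can then reuse the same classification, running only the files whose description matches through a case-insensitive grep for "csc".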

    Read the article

  • Problem validating an XSD file: The content type of a derived type and that of its base must both be mixed or both be element-only

    - by Paulo Tavares
    Hi, I have the following XML schema:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema xmlns:netconf="urn:ietf:params:xml:ns:netconf:base:1.0"
                targetNamespace="urn:ietf:params:xml:ns:netconf:base:1.0" ...
          <complexType name="dataInlineType">
            <xs:complexContent>
              <xs:extension base="xs:anyType"/>
            </xs:complexContent>
          </complexType>
          <complexType name="get-config_output_type__">
            <complexContent>
              <extension base="netconf:dataInlineType">
                <sequence>
                  <element name="data">
                    <complexType>
                      <sequence>
                        <element name="__.get-config.output.data.A__" minOccurs="0" maxOccurs="unbounded"/>
                      </sequence>
                    </complexType>
                  </element>
                  <element name="__.get-config.A__" minOccurs="0" maxOccurs="unbounded"/>
                </sequence>
              </extension>
            </complexContent>

    And I am getting the following error:

        cos-ct-extends.1.4.3.2.2.1.a: The content type of a derived type and that of its base must both be mixed or both be element-only. Type 'get-config_output_type__' is element only, but its base type is not.

    If I make both elements mixed="true" I get another error:

        cos-nonambig: WC[##any] and "urn:ietf:params:xml:ns:netconf:base:1.0":data (or elements from their substitution group) violate "Unique Particle Attribution". During validation against this schema, ambiguity would be created for those two particles.

    I am using Eclipse to validate my schema, so what can I do?

    Read the article

  • passwd ldap request to ActiveDirectory fails on half of 2500 users

    - by groovehunter
    We just set up Active Directory in my company and imported all Linux users and groups. On a Linux client (configured to ask LDAP in nsswitch.conf): if I do a common ldapsearch against the AD LDAP server I get the complete count of about 2580 users. But the following only gets part of all users, 1221 in number:

        getent passwd | wc -l

    Running it with strace shows what looks like an attempt to reconnect. My ideas were: does the Linux authentication procedure run its LDAP search with a parameter incompatible with AD LDAP? Or perhaps it is an encoding issue - the Windows users are entered in AD with all kinds of characters. Maybe someone could shed light on this and give a hint how to debug it further!? Here's our ldap.conf:

        host audc01.mycompany.de audc03.mycompany.de
        base ou=location,dc=mycompany,dc=de
        ldap_version 3
        binddn cn=manager,ou=location,dc=mycompany,dc=de
        bindpw Password
        timelimit 120
        idle_timelimit 3600
        nss_base_passwd cn=users,cn=import,ou=location,dc=mycompany,dc=de?sub
        nss_base_group ou=location,dc=mycompany,dc=de?sub
        # RFC 2307 (AD) mappings
        nss_map_objectclass posixAccount User
        # nss_map_objectclass shadowAccount User
        nss_map_objectclass posixGroup Group
        nss_map_attribute uid sAMAccountName
        nss_map_attribute cn sAMAccountName
        # Display Name
        nss_map_attribute gecos cn
        ## nss_map_attribute homeDirectory unixHomeDirectory
        nss_map_attribute loginShell msSFU30LoginShell
        # PAM attributes
        pam_login_attribute sAMAccountName
        # Location based login
        pam_groupdn CN=Location-AU-Login,OU=au,OU=Location,DC=mycompany,DC=de
        pam_member_attribute msSFU30PosixMember
        ## pam_lookup_policy yes
        pam_filter objectclass=User
        nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,couchdb,daemon,games,gdm,gnats,haldaemon,hplip,irc,kernoops,libuuid,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,statd,sync,sys,syslog,usbmux,uucp,www-data

    And here is the trace from strace getent passwd:

        poll([{fd=4, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 1, 120000) = 1 ([{fd=4, revents=POLLIN}])
        read(4, "0\204\0\0\0A\2\1", 8) = 8
        read(4, "\4e\204\0\0\0\7\n\1\0\4\0\4\0\240\204\0\0\0+0\204\0\0\0%\4\0261.2."..., 63) = 63
        stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=1151, ...}) = 0
        geteuid32() = 12560
        getsockname(4, {sa_family=AF_INET, sin_port=htons(60334), sin_addr=inet_addr("10.1.35.51")}, [16]) = 0
        getpeername(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.1.5.81")}, [16]) = 0
        time(NULL) = 1297684722
        rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
        munmap(0xb7617000, 1721) = 0
        close(3) = 0
        rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
        rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
        rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
        write(4, "0\5\2\1\5B\0", 7) = 7
        shutdown(4, 2 /* send and receive */) = 0
        close(4) = 0
        shutdown(-1, 2 /* send and receive */) = -1 EBADF (Bad file descriptor)
        close(-1) = -1 EBADF (Bad file descriptor)
        exit_group(0) = ?
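
    One well-known cause that fits a truncated result like this (an assumption, not a diagnosis): Active Directory enforces a MaxPageSize of 1000 entries per search page, and an NSS LDAP client that never requests paged results silently stops after the first page, while a plain ldapsearch session may page through everything. A quick test from the client, reusing the hostname and DNs from the ldap.conf above:

        # ask explicitly for paged results; if this returns all ~2580 users while
        # getent stops short, the NSS client's paged-results handling is the suspect
        ldapsearch -x -LLL -H ldap://audc01.mycompany.de \
            -D cn=manager,ou=location,dc=mycompany,dc=de -W \
            -b cn=users,cn=import,ou=location,dc=mycompany,dc=de \
            -E pr=1000/noprompt '(objectClass=User)' sAMAccountName | grep -c '^sAMAccountName:'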

    Read the article

  • Windows Setup could not configure Windows to run on this computer's hardware

    - by Hello71
    The whole installation goes smoothly up to the point of "Completing installation ...". The monitor changes resolution, after which a standard dialog box pops up saying:

        Windows Setup could not configure Windows to run on this computer's hardware

    Then, in a few seconds, the whole machine powers down. Trying to restart produces the message:

        STOP: c000021a {Fatal System Error} 0x00000000 (0xc0000001 0x00100448)

    OR it boots into Setup and comes up with the message "Windows Setup encountered an unexpected error..." (this is not the actual error, just paraphrasing). I tried using the OEM restore instead of a regular install, but it fails with the same error. (Even though it worked before...) General specs:

        HP Pavilion Elite e9262f
        Intel Core i5-750 Processor
        ATI Radeon HD 4650
        Hitachi HDT721010SLA360 ATA Device
        6GB DDR3 RAM
        SuperMulti DVD Burner with LightScribe
        Some built-in Wi-Fi module
        http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01916917

    I've tried disconnecting the wireless card, disabling the built-in Ethernet and FireWire via the BIOS, and replacing the wireless keyboard and mouse with wired USB ones. Didn't work. I've also tried changing the SATA controller settings in the BIOS to RAID, AHCI, and IDE, reinstalling each time I changed. Still not working. I think the reason it shows the Fatal System Error is that it didn't finish installing before it errored out and shut down, so the system is left in an inconsistent state. I've tried 3 different copies (including the OEM restore) of Windows 7 now, and they all fail at the same point, with the same error message. I've tried to install Windows 7 maybe 10 times already, with the exact same error message at the exact same location. Hm... Interestingly, the 32-bit version of Windows 7 works, but the 64-bit version doesn't. Perhaps it was a badly burned disk? Reburning the 64-bit version still comes up with the same error. Here's a picture of the side of the case that clearly says it came with Windows 7 64-bit, along with the model number and CPU. sudo fdisk -l:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009896f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      104391    7  HPFS/NTFS
        /dev/sda2              14       94119   755906445    7  HPFS/NTFS
        /dev/sda3          119922      121602    13492224    7  HPFS/NTFS
        /dev/sda4           94120      119922   207257740+   5  Extended
        /dev/sda5          119527      119922     3170769   82  Linux swap / Solaris
        /dev/sda6          107174      119526    99225441   83  Linux
        /dev/sda7           94120      107173   104856192    7  HPFS/NTFS

        Partition table entries are not in disk order

    Read the article

  • grep simply fails when used on a few files

    - by Reid
    I've been trying for about the past 30 minutes to get this to work properly. grep is not exactly the most difficult thing to use, so I'm somewhat baffled as to why this won't work. The files I'm trying to use grep on are simple XHTML log files. Their names are in the format [email protected], though I don't think that should matter, and inside is simple XHTML. I copied one such log file to testfile so you can see the output of some commands and why it's baffling to me:

        [~/.chatlogs_windows/dec] > whoami
        reid
        [~/.chatlogs_windows/dec] > type grep
        grep is /bin/grep
        [~/.chatlogs_windows/dec] > uname -a
        Linux reid-pc 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:32:27 UTC 2010 x86_64 GNU/Linux
        [~/.chatlogs_windows/dec] > head -1 /etc/issue
        Linux Mint 10 Julia
        [~/.chatlogs_windows/dec] > ls -Alh | grep testfile
        -rw-r--r-- 1 reid reid 63K 2011-01-10 12:45 testfile
        [~/.chatlogs_windows/dec] > tail -3 testfile
        </body>
        </html>
        [~/.chatlogs_windows/dec] > file testfile
        testfile: XML document text
        [~/.chatlogs_windows/dec] > grep html testfile
        [~/.chatlogs_windows/dec] > grep body testfile
        [~/.chatlogs_windows/dec] > grep "</html>" testfile
        [~/.chatlogs_windows/dec] > grep "</body>" testfile
        [~/.chatlogs_windows/dec] > cat testfile | grep html
        [~/.chatlogs_windows/dec] > cat testfile | wc -l
        231
        [~/.chatlogs_windows/dec] > cat testfile | tail -3
        </body>
        </html>
        [~/.chatlogs_windows/dec] > chmod a+rw testfile && ls -Alh | grep testfile
        -rw-rw-rw- 1 reid reid 63K 2011-01-10 12:45 testfile
        [~/.chatlogs_windows/dec] > grep html testfile

    That's what I'm attempting to do. I want to just use grep -ri query . in ~/.chatlogs_windows, which normally works perfectly for me... but for some reason, it completely fails at going through these files. If it matters, I copied these files off of my Windows 7 partition. But I chown'd them and gave myself all the appropriate permissions, and other programs (like cat) seem to read them just fine. I also copied testfile to testfile_unix and converted the line endings and tried that, but it didn't work either. I'm using zsh, but I tried it on bash and that failed too. Also, grep works normally: I tried it out on my documents folder and it worked flawlessly. If you need any more information, just let me know. I tried googling around, but I found no reason for grep to simply not work. Thanks in advance.
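
    One thing worth ruling out, since the files came off a Windows 7 partition (a guess, not a diagnosis - file calling it "XML document text" argues against it, but the check is cheap): many Windows programs write logs as UTF-16, where every other byte is a NUL. cat and tail will still appear to display such files, but grep finds no literal "html" because the letters are interleaved with NUL bytes:

        # a leading "ff fe" (or "fe ff") byte-order mark and interleaved 00 bytes mean UTF-16
        head -c 16 testfile | hexdump -C
        # if so, converting the encoding should make grep behave again
        iconv -f UTF-16 -t UTF-8 testfile | grep -c html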

    Read the article

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The svn repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold ASP code as well. Bonus points for a solution which works more or less the same way with ASP code on Windows Server 2008 R2.

    We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk; the live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: develop on the dev's box - commit into the trunk - auto-deploy on the staging system - test on the staging system - merge into /branches/live/ - manually deploy on the live system.

    This works very well for one-way changes; however, we have some trouble on every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the svn admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. Finally, we have done the update via the WP GUI, restored the svn admin area, added/removed the files, and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system).

    I'm currently thinking of the following: the htdocs of each website is an svn export; each website has an svn working copy beside the htdocs directory; and a script "replays" the changes from htdocs into the wc after a WP update (rsync'ing the changed files to the working copy, rsync'ing new files and svn add'ing them, and finally svn delete'ing the deleted files). The script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.). A sketch of this idea is shown below. Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
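
    A minimal sketch of that replay script, assuming the layout described above (an htdocs/ export next to a wc/ working copy; all paths and excludes are placeholders to adapt):

        #!/bin/sh
        # mirror the updated export into the working copy, protecting svn metadata
        # and site-specific files from deletion
        rsync -a --delete \
            --exclude '.svn' \
            --exclude 'wp-config.php' \
            --exclude 'wp-content/uploads/' \
            htdocs/ wc/
        cd wc || exit 1
        # schedule new files; --force keeps svn quiet about already-versioned ones
        svn add --force .
        # svn status marks files removed by the update with '!'; schedule their deletion
        # (filenames with spaces would need more careful parsing)
        svn status | awk '/^!/ {print $2}' | xargs -r svn delete
        svn commit -m "Replay WordPress update from htdocs"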

    Read the article

  • mongodb : Can't create new thread on FreeBSD?

    - by user197739
    We experienced something strange on our mongodb GridFS platform. The platform is a dual Xeon E5 (two quad cores) with 128GB of memory, running FreeBSD 9 with a ZFS pool dedicated to mongodb.

        [root@mongofile1 ~]# uname -sr
        FreeBSD 9.1-RELEASE

    Our /boot/loader.conf:

        vfs.zfs.arc_min="2048M"
        vfs.zfs.arc_max="7680M"
        vm.kmem_size_max="16G"
        vm.kmem_size="12G"
        vfs.zfs.prefetch_disable="1"
        kern.ipc.nmbclusters="32768"

    /etc/sysctl.conf:

        net.inet.tcp.msl=15000
        net.inet.tcp.keepidle=300000
        kern.ipc.nmbclusters=32768
        kern.ipc.maxsockbuf=2097152
        kern.ipc.somaxconn=8192
        kern.maxfiles=65536
        kern.maxfilesperproc=32768
        net.inet.tcp.delayed_ack=0
        net.inet.tcp.sendspace=65535
        net.inet.udp.recvspace=65535
        net.inet.udp.maxdgram=57344
        net.local.stream.recvspace=65535
        net.local.stream.sendspace=65535

    We follow the recommendation for ulimit:

        [root@mongofile1 ~]# su - mongodb
        $ ulimit -a
        cpu time (seconds, -t) unlimited
        file size (512-blocks, -f) unlimited
        data seg size (kbytes, -d) 33554432
        stack size (kbytes, -s) 524288
        core file size (512-blocks, -c) unlimited
        max memory size (kbytes, -m) unlimited
        locked memory (kbytes, -l) unlimited
        max user processes (-u) 5547
        open files (-n) 32768
        virtual mem size (kbytes, -v) unlimited
        swap limit (kbytes, -w) unlimited
        sbsize (bytes, -b) unlimited
        pseudo-terminals (-p) unlimited

    This server has a twin (exactly the same config) for a replica set in another data center, and we have a virtualized arbiter. Every so often, after almost 3 days, the mongodb process exits. The problem begins with:

        Fri Nov 8 11:27:31.741 [conn774697] end connection 192.168.10.162:47963 (23 connections now open)
        Fri Nov 8 11:27:31.770 [initandlisten] can't create new thread, closing connection
        Fri Nov 8 11:27:31.771 [rsHealthPoll] replSet member mongofile2:27017 is now in state DOWN
        Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.162:47968 #774702 (20 connections now open)
        Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.161:28522 #774703 (21 connections now open)
        Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.164:15406 #774704 (22 connections now open)
        Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.163:25750 #774705 (23 connections now open)
        Fri Nov 8 11:27:31.810 [initandlisten] connection accepted from 192.168.10.182:20779 #774706 (24 connections now open)
        Fri Nov 8 11:27:31.855 [initandlisten] connection accepted from 192.168.10.161:28524 #774707 (25 connections now open)
        Fri Nov 8 11:27:31.869 [initandlisten] connection accepted from 192.168.10.182:20786 #774708 (26 connections now open)

    It continues with many "can't create new thread" messages:

        [root@mongofile1 /usr/mongodb]# tail -n 15000 mongod.log.old | grep "create new thread" | wc
        5020 55220 421680

    and finishes with a magnificent:

        Fri Nov 8 11:30:22.333 [rsMgr] replSet warning caught unexpected exception in electSelf() pure virtual method called
        Fri Nov 8 11:30:22.333 Got signal: 6 (Abort trap: 6).
        Fri Nov 8 11:30:22.337 Backtrace: 0x599efc 0x8035cb516
        0x599efc <_ZN5mongo10abruptQuitEi+988> at /usr/local/bin/mongod
        0x8035cb516 <_pthread_sigmask+918> at /lib/libthr.so.3

    Extract for mongodb from top:

        78126 mongodb 77 20 0 1253G 1449M sbwait 0 0:20 0.00% mongod

    If I restart the process when it crashes, the problem is fixed for almost 3 days. Has anyone seen this before, or know of a fix?
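
    Two FreeBSD limits worth checking before blaming mongod itself (a suggestion, not a confirmed diagnosis): every incoming connection gets a service thread, and thread creation fails either when the per-process thread ceiling is reached or when the user's process/thread budget (the "max user processes 5547" shown above) runs out:

        # per-process thread ceiling
        sysctl kern.threads.max_threads_per_proc
        # threads the running mongod currently holds, for comparison
        procstat -t $(pgrep mongod) | wc -l

    If the two numbers are in the same neighborhood when the crashes start, raising that limit (or the mongodb user's limits) would be the first experiment.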

    Read the article

  • 2xAMD Opteron 6128 with libvirt, Physical CPU 13 doesn't exist

    - by yak
    I need help with a libvirt(?) problem. Server specs:

        ProLiant DL165 G7
        2x AMD Opteron(tm) Processor 6128
        System: Debian GNU/Linux testing (wheezy) 3.2.0-3-amd64
        libvirt 0.9.12-5
        kvm 1:1.1.2+dfsg-2

        $ grep processor /proc/cpuinfo | wc -l
        16

        $ virsh nodeinfo
        setlocale: No such file or directory
        CPU model: x86_64
        CPU(s): 16
        CPU frequency: 800 MHz
        CPU socket(s): 2
        Core(s) per socket: 4
        Thread(s) per core: 1
        NUMA cell(s): 1
        Memory size: 66114200 KiB

        $ virsh capabilities
        ..
        <topology>
          <cells num='4'>
            <cell id='0'>
              <cpus num='4'>
                <cpu id='0'/>
                <cpu id='1'/>
                <cpu id='2'/>
                <cpu id='3'/>
              </cpus>
            </cell>
            <cell id='1'>
              <cpus num='4'>
                <cpu id='4'/>
                <cpu id='5'/>
                <cpu id='6'/>
                <cpu id='7'/>
              </cpus>
            </cell>
            <cell id='2'>
              <cpus num='4'>
                <cpu id='12'/>
                <cpu id='13'/>
                <cpu id='14'/>
                <cpu id='15'/>
              </cpus>
            </cell>
            <cell id='3'>
              <cpus num='4'>
                <cpu id='8'/>
                <cpu id='9'/>
                <cpu id='10'/>
                <cpu id='11'/>
              </cpus>
            </cell>
          </cells>
        </topology>
        ..

        $ virsh vcpupin vm 0 13,12,11,10,9,8,7,6,5
        error: Physical CPU 13 doesn't exist.
        error: cpulist: Invalid format.

    Question: why do my VM guests use only the first 8 CPUs while the next 8 are idling?

        $ for host in $(virsh list | awk '{print $2}'); do virsh vcpuinfo $host; done | grep ^CPU: | sort | uniq
        CPU: 0
        CPU: 1
        CPU: 2
        CPU: 3
        CPU: 4
        CPU: 5
        CPU: 6
        CPU: 7

    Any ideas how to change it?
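
    A sanity check that separates what the kernel sees from what libvirt reports (note the inconsistency already visible above: virsh nodeinfo claims a single NUMA cell, while the capabilities XML lists four):

        # CPUs the kernel actually brought online
        cat /sys/devices/system/cpu/online
        # how the kernel maps CPUs to sockets and NUMA nodes
        lscpu | egrep 'Socket|NUMA'

    If the kernel reports 0-15 online while virsh refuses CPU 13, the confusion sits in libvirt's topology handling rather than in the hardware; the usual ways out would be upgrading libvirt or trying a range in the cpulist (for example 8-15) instead of a comma-separated list - whether this particular libvirt build accepts ranges is an assumption to verify.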

    Read the article

  • git private server error: "Permission denied (publickey)."

    - by goddfree
    I followed the instructions here in order to set up a private git server on my Amazon EC2 instance. However, I am having problems when trying to SSH into the git account. Specifically, I get the error "Permission denied (publickey)." Here are the permissions of my files/folders on the EC2 server:

        drwx------ 4 git git 4096 Aug 13 19:52 /home/git/
        drwx------ 2 git git 4096 Aug 13 19:52 /home/git/.ssh
        -rw------- 1 git git 400 Aug 13 19:51 /home/git/.ssh/authorized_keys

    Here are the permissions of my files/folders on my own computer:

        drwx------ 5 CYT staff 170 Aug 13 14:51 .ssh
        -rw------- 1 CYT staff 1679 Aug 13 13:53 .ssh/id_rsa
        -rw-r--r-- 1 CYT staff 400 Aug 13 13:53 .ssh/id_rsa.pub
        -rw-r--r-- 1 CYT staff 1585 Aug 13 13:53 .ssh/known_hosts

    When checking my logs in /var/log/secure, I used to get the following error message every time I tried to SSH:

        Authentication refused: bad ownership or modes for file /home/git/.ssh/authorized_keys

    However, after making a few permission changes, I no longer get this error message. Despite this, I am still getting the "Permission denied (publickey)." message every time I try to SSH. The command I am using to SSH is ssh -T git@my-ip. Here is the full log I get when I run ssh -vT [email protected]:

        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: Connecting to my-ip [my-ip] port 22.
        debug1: Connection established.
        debug1: identity file /Users/CYT/.ssh/id_rsa type -1
        debug1: identity file /Users/CYT/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/CYT/.ssh/id_dsa type -1
        debug1: identity file /Users/CYT/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2
        debug1: match: OpenSSH_6.2 pat OpenSSH*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr [email protected] none
        debug1: kex: client->server aes128-ctr [email protected] none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 08:ad:8a:bc:ab:4d:5f:73:24:b2:78:69:46:1a:a5:5a
        debug1: Host 'my-ip' is known and matches the RSA host key.
        debug1: Found key in /Users/CYT/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/CYT/.ssh/id_rsa
        debug1: Trying private key: /Users/CYT/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I have spent a few hours going through threads on various sites, including SO and SF, looking for a solution. It seems that the permissions for my files are all okay, but I just can't figure out the problem. Any help would be greatly appreciated.

    Edit: EEAA: Here are the outputs you requested:

        $ getent passwd git
        git:x:503:504::/home/git:/bin/bash
        $ grep ssh ~git/.ssh/authorized_keys | wc -l
        grep: /home/git/.ssh/authorized_keys: Permission denied
        0
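
    One detail in that log deserves attention (an observation plus a hedged check): "identity file /Users/CYT/.ssh/id_rsa type -1" means the client could not determine the key at that point, and a "Trying private key" line that is never followed by "Offering public key" means no key was actually presented to the server. Verifying both ends:

        # client: confirm the key pair loads and note its fingerprint
        ssh-keygen -l -f ~/.ssh/id_rsa
        # client: force this identity explicitly and watch for "Offering public key"
        ssh -v -i ~/.ssh/id_rsa -T git@my-ip
        # server (as root): confirm a key is actually installed for the git user
        sudo grep -c ssh-rsa /home/git/.ssh/authorized_keys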

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: on a Linux (CentOS) server, if a process-monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate. But maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process-monitoring scripts that 'failure' means 'this script failed to need to fix a problem' (rather than what I would expect, which is that exit 1; would mean 'the process being monitored has failed')?

    My specific case: I'm looking at a process-monitoring script written by my web hosting company. By process-monitoring script, I mean a script executed by cron on a regular basis that checks if an important system process is running and, if it isn't running, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
        if [ $SERVICE != 0 ]; then
            exit 1;
        else
            #take action
        fi

    Seems simple enough even for someone with limited knowledge like me, except the exit 1; part seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed the program that everything is fine, while exit n; where n > 0 and n < 127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule: it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me, this looks like a mistake, but my experience in this area is limited. Are there cases where calling exit 1; in the "everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake?

    Wider context is pretty simple. It's a CentOS VPS, running Plesk. The script is being called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
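
    For contrast, a sketch of the same check written to follow the usual convention (zero means success), leaning on lsof's own exit status instead of counting lines:

        #!/bin/sh
        # lsof exits 0 when it finds a process on the port, non-zero otherwise
        if /usr/sbin/lsof -i tcp:8080 >/dev/null 2>&1; then
            exit 0    # service is listening: report success
        else
            # take action (restart the service, mail an administrator), then
            # report failure so cron and any monitoring wrapper can see it
            exit 1
        fi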

    Read the article

  • The program executes properly in Dev-C++ but gives problems on Linux. The movement is becoming

    - by srinija
        #include<stdio.h>
        #include<GL/glut.h>

        GLfloat v[3][24]={{100.0,300.0,350.0,50.0,100.0,120.0,120.0,100.0,260.0,280.0,280.0,260.0,140.0,160.0,160.0,140.0,180.0,200.0,200.0,180.0,220.0,240.0,240.0,220.0},
                          {100.0,100.0,200.0,200.0,160.0,160.0,180.0,180.0,160.0,160.0,180.0,180.0,160.0,160.0,180.0,180.0,160.0,160.0,180.0,180.0,160.0,160.0,180.0,180.0},
                          {1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0}};
        GLfloat v1[3][16]={{50.0,350.0,350.0,50.0,100.0,300.0,300.0,100.0,125.0,175.0,175.0,125.0,225.0,275.0,275.0,225.0},
                           {200.0,200.0,210.0,210.0,210.0,210.0,240.0,240.0,240.0,240.0,310.0,310.0,240.0,240.0,310.0,310.0},
                           {1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0}};
        GLfloat colors[4][3]={{0.0,0.0,1.0},{0.9961,0.9961,0.65625},{1.0,0.0,1.0},{1.0,.0,1.0}};
        static float q,w,e;
        static float fq,fw,fe;
        static GLfloat wa=0,wb=0,wc=0,ba,bb,bc;
        int flag;

        void myinit(void) {
            glClearColor(0.506,.7,1,0.0); glPointSize(2.0); glLoadIdentity();
            glOrtho(0.0,499.0,0.0,499.0,-300.0,300.0);
        }

        void draw_top_boxes(GLint i,GLint j) {
            glColor3f(1.0,0.0,0.0);
            glBegin(GL_POLYGON);
            glColor3fv(colors[j]); // to draw the boat
            glVertex2f(v1[0][i+0],v1[1][i+0]);
            glColor3fv(colors[j+1]); glVertex2f(v1[0][i+1],v1[1][i+1]);
            glColor3fv(colors[j+2]); glVertex2f(v1[0][i+2],v1[1][i+2]);
            glColor3fv(colors[j+3]); glVertex2f(v1[0][i+3],v1[1][i+3]);
            glEnd();
        }

        void draw_polygon(GLint i) {
            glBegin(GL_POLYGON); // to draw the boat
            glColor3f(0.0,0.0,0.0);
            glColor3fv(colors[0]); glVertex2f(v[0][i+0],v[1][i+0]);
            glColor3fv(colors[1]); glVertex2f(v[0][i+1],v[1][i+1]);
            glColor3fv(colors[2]); glVertex2f(v[0][i+2],v[1][i+2]);
            glColor3fv(colors[3]); glVertex2f(v[0][i+3],v[1][i+3]);
            glEnd();
        }

        void draw_boat() {
            draw_polygon(0); draw_polygon(4); draw_polygon(8); draw_polygon(12); draw_polygon(16); draw_polygon(20);
            draw_top_boxes(0,0); draw_top_boxes(4,0); draw_top_boxes(8,0); draw_top_boxes(12,0);
            glFlush(); glPopMatrix(); glPopMatrix();
        }

        void draw_water() {
            GLfloat i; GLfloat x=0,y=103,j=0; GLfloat k;
            glPushMatrix(); glTranslatef(wa,wb,wc); glPushMatrix(); glColor3f(0,0,1);
            for(k=y;k>0;k-=6) {
                for(i=1;i<30;i++) { glBegin(GL_LINES); glVertex2f(j,k); glVertex2f(j+10,k); glEnd(); j=j+20; }
                j=0;
            }
            glPopMatrix(); glPopMatrix();
        }

        void draw_fishes() {
            glPushMatrix(); glTranslatef(fq,12.0,fe); glPushMatrix();
            glColor3f(.99609375,0.2578125,0.2578125);
            glBegin(GL_TRIANGLES); glVertex2f(100,80); glVertex2f(100,60); glVertex2f(85,70); glEnd();
            glColor3f(.99609375,0.2578125,0.2578125);
            glBegin(GL_TRIANGLES); glVertex2f(100,70); glVertex2f(110,75); glVertex2f(110,65); glEnd();
            glColor3f(0,0,0); glBegin(GL_POINTS); glVertex2f(90,71); glEnd();
            glColor3f(.99609375,0.2578125,0.2578125);
            glBegin(GL_TRIANGLES); glVertex2f(200,80); glVertex2f(200,60); glVertex2f(185,70); glEnd();
            glColor3f(.99609375,0.2578125,0.2578125);
            glBegin(GL_TRIANGLES); glVertex2f(200,70); glVertex2f(210,75); glVertex2f(210,65); glEnd();
            glColor3f(0,0,0); glBegin(GL_POINTS); glVertex2f(190,71); glEnd();
            glPopMatrix(); glPopMatrix(); glFlush();
        }

        void draw_cloud() {
            GLfloat m=100,n=400,o=10;
            for(int i=0;i<7;i++) {
                glPushMatrix(); glColor3f(1.0,1.0,1.0);
                if(i==1) glTranslated(125,415,10);
                else if(i==3||i==5) glTranslated(m,n+5,o);
                else glTranslated(m,n,o);
                glutSolidSphere(20.0,5000,150); glPopMatrix(); m+=10;
            }
        }

        void draw_square() {
            glColor3f(0,0.5,0.996);
            glBegin(GL_POLYGON); glVertex2f(0,0); glVertex2f(1000,0); glVertex2f(0,300); glVertex2f(1000,300); glEnd();
            glFlush();
        }

        void draw_brotate() {
            glPushMatrix(); glColor3f(0.96,0.5,0.25); // to draw body of the bird
            glTranslated(300,400,10); glScalef(3,1,1); glutSolidSphere(6,50000,15); glPopMatrix();
            glPushMatrix(); glTranslated(323,400,10); glutSolidSphere(5,50000,15); glPopMatrix();
            glColor3f(0,0,0); glBegin(GL_POINTS); glVertex2f(325,401); glEnd();
            glColor3f(0.96,0.5,0.25); // to draw wings
            glBegin(GL_LINES); glVertex2f(294,394); glVertex2f(286,389); glEnd();
            glBegin(GL_LINES); glVertex2f(286,389); glVertex2f(295,391); glEnd();
            glBegin(GL_LINES); glVertex2f(295,391); glVertex2f(285,385); glEnd();
            glBegin(GL_LINES); glVertex2f(285,385); glVertex2f(309,395); glEnd();
            glBegin(GL_LINES); glVertex2f(294,406); glVertex2f(286,411); glEnd();
            glBegin(GL_LINES); glVertex2f(286,411); glVertex2f(295,409); glEnd();
            glBegin(GL_LINES); glVertex2f(295,409); glVertex2f(285,415); glEnd();
            glBegin(GL_LINES); glVertex2f(285,415); glVertex2f(309,406); glEnd();
            glColor3f(0.96,0.5,0.25);
        }

        void draw_bird() {
            GLfloat x=200,y=400,z=10;
            draw_brotate();
            glBegin(GL_LINES); // draw legs of the bird
            glVertex2f(285,402); glVertex2f(275,402); glEnd();
            glBegin(GL_LINES); glVertex2f(285,398); glVertex2f(275,398); glEnd();
            glBegin(GL_LINES); glVertex2f(275,402); glVertex2f(270,405); glEnd();
            glBegin(GL_LINES); glVertex2f(275,402); glVertex2f(270,398); glEnd();
            glBegin(GL_LINES); glVertex2f(275,398); glVertex2f(273,400); glEnd();
            glBegin(GL_LINES); glVertex2f(275,398); glVertex2f(270,395); glEnd();
            glBegin(GL_LINES); glVertex2f(323,405); glVertex2f(323,407); glEnd();
            glPushMatrix(); glTranslatef(323,409,10); glutSolidSphere(2,200,20); glPopMatrix();
            glBegin(GL_TRIANGLES); glVertex2f(328,400); glVertex2f(331,397); glVertex2f(327,398.5); glEnd();
            glFlush();
        }

        void drawstars() {
            glColor3f(1.0,1.0,1.0);
            glBegin(GL_POINTS);
            glVertex3f(300.0,400.0,10.0); glVertex3f(200,400.0,10.0); glVertex3f(150,450.0,10.0);
            glVertex3f(100,470.0,10.0); glVertex3f(50,450.0,10.0); glVertex3f(50,350.0,10.0);
            glVertex3f(90,365.0,10.0); glVertex3f(350,450.0,10.0); glVertex3f(275,470.0,10.0);
            glVertex3f(280,430.0,10.0); glVertex3f(250,400.0,10.0); glVertex3f(450,450.0,10.0);
            glVertex3f(430,430.0,10.0); glVertex3f(430,470.0,10.0); glVertex3f(300,450.0,10.0);
            glVertex3f(265,380.0,10.0); glVertex3f(235,450.0,10.0);
            glEnd();
        }

        void draw_all() {
            glClear(GL_COLOR_BUFFER_BIT);
            if(flag==0) {
                glDisable(GL_LIGHTING); // immp one
                draw_square(); draw_cloud(); glClearColor(0.506,.7,1,0.0);
                glTranslatef(q,w,e); glPushMatrix(); glColor3f(1.0,0.0,0.0); draw_boat(); draw_fishes();
                glPushMatrix(); glColor3f(1.0,1.0,0.0); glTranslated(400,400,10); glutSolidSphere(20.0,5000,150); glPopMatrix();
            }
            if(flag==1) {
                glDisable(GL_LIGHTING); // imp one
                draw_square(); draw_cloud(); glClearColor(0.9960,0.7070,0.3164,0.0);
                glTranslatef(q,w,e); glPushMatrix(); glColor3f(1.0,0.0,0.0); draw_boat(); draw_fishes();
                glPushMatrix(); glColor3f(1.0,1.0,0.0); glTranslated(400,400,10); glutSolidSphere(20.0,500,100); glPopMatrix();
            }
            if(flag==2) {
                // just try and change values in these arrays, specially the position array
                drawstars(); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0);
                // GLfloat emission[]={0.1,0.1,0.1,0.0};
                GLfloat diffuse[] = { 0.40, 0.40,0.40, 1.0 };
                GLfloat ambiance[] = { 0.5, 0.5,0.5, 1.0 };
                GLfloat specular[] = { 1.3, 1.3,.3, 1.0 };
                GLfloat intensity[]={500.0};
                GLfloat position[] = { 10,30,-30,1.0 };
                glLightfv(GL_LIGHT0, GL_POSITION, position);
                glLightfv(GL_LIGHT0, GL_DIFFUSE,diffuse);
                glLightfv(GL_LIGHT0, GL_AMBIENT,ambiance);
                glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER,GL_TRUE);
                glLightfv(GL_LIGHT0, GL_SPECULAR,specular);
                glLightfv(GL_LIGHT0, GL_INTENSITY,intensity);
                glColor3f(0,0.5,0.996);
                glBegin(GL_POLYGON); glVertex2f(0,0); glVertex2f(1000,0); glVertex2f(0,150); glVertex2f(1000,150); glEnd();
                glTranslatef(q,w,e); glPushMatrix(); glColor3f(1.0,0.0,0.0); draw_boat(); draw_fishes();
                glDisable(GL_LIGHTING); glDisable(GL_LIGHT0); draw_cloud(); glClearColor(0.0,0.0,0.0,0.0);
                glPushMatrix(); glColor3f(1.0,1.0,1.0); glTranslated(400,400,10); glutSolidSphere(20.0,500,100); glPopMatrix();
                glColor3f(1.0,1.0,1.0); glBegin(GL_POINTS); glVertex3f(300.0,400.0,10.0); glEnd();
            }
            glPushMatrix(); glTranslatef(ba,bb,bc); glPushMatrix(); draw_bird(); glPopMatrix(); glPopMatrix();
            GLfloat i; glPushMatrix(); GLfloat x=0,y=100,j=0; int k;
            //draw_water();
            Sleep(60);
            q+=5; fq-=3.5;
            if(q>=440.0) //470
                q=-390.0; //400
            if(fq<=-300) //500
                fq=400.0; //400
            wa-=1;
            if(wa<=(-20)) wa=-0.5;
            ba+=6;
            if(ba>=500) ba=-400;
            glFlush(); glutSwapBuffers();
        }

        void display(void) { draw_all(); }

        void color_menu(int id) {
            switch(id) {
                case 1: flag=0; break;
                case 2: flag=1; break;
                case 3: flag=2; break;
                case 4: exit(0); break;
            }
            glutPostRedisplay();
        }

        void main_menu(int id) {
            switch(id) {
                case 1: break;
                case 2: exit(0); break;
                glutPostRedisplay();
            }
        }

        int main(int argc,char **argv) {
            int sub_menu;
            glutInit(&argc,argv);
            glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE);
            glutInitWindowSize(1000,1000);
            glutInitWindowPosition(0,0);
            glutCreateWindow("Ship");
            sub_menu=glutCreateMenu(color_menu);
            glutAddMenuEntry("Morning",1);
            glutAddMenuEntry("Evening",2);
            glutAddMenuEntry("Night",3);
            glutAddMenuEntry("Quit",4);
            glutCreateMenu(main_menu);
            glutAddSubMenu("View",sub_menu);
            glutAddMenuEntry("Quit",2);
            glutAttachMenu(GLUT_RIGHT_BUTTON);
            glutDisplayFunc(display);
            glutIdleFunc(display);
            myinit();
            glutMainLoop();
            glFlush();
        }
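
    The most likely cross-platform snag in the code above is the Sleep(60) call inside draw_all(): Sleep() is a Windows API from <windows.h>, so it does not exist on Linux, and once it is dropped the idle-driven redraw loop runs unthrottled and the movement becomes far too fast. A hedged sketch of a portable frame delay (assuming 60 ms per frame was the intended pace):

        /* portable frame delay: Windows Sleep() takes milliseconds,
           POSIX usleep() takes microseconds */
        #ifdef _WIN32
        #include <windows.h>
        #define FRAME_DELAY() Sleep(60)
        #else
        #include <unistd.h>
        #define FRAME_DELAY() usleep(60 * 1000)
        #endif

    A cleaner GLUT-native fix is to drop the idle redraw entirely and drive the animation with glutTimerFunc(60, timer, 0), re-registering the timer inside its callback so frames arrive on a fixed schedule on both platforms.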

    Read the article

  • This code works properly in Dev C++, but on the Linux platform it is giving problems with the moveme

    - by srinija
        #include<stdio.h>
        #include<GL/glut.h>
        #include<stdlib.h>

        GLfloat v[3][24]={{100.0,300.0,350.0,50.0,100.0,120.0,120.0,100.0,260.0,280.0,280.0,260.0,140.0,160.0,160.0,140.0,180.0,200.0,200.0,180.0,220.0,240.0,240.0,220.0},
                          {100.0,100.0,200.0,200.0,160.0,160.0,180.0,180.0,160.0,160.0,180.0,180.0,160.0,160.0,180.0,180.0,160.0,160.0,180.0,180.0,160.0,160.0,180.0,180.0},
                          {1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0}};
        GLfloat v1[3][16]={{50.0,350.0,350.0,50.0,100.0,300.0,300.0,100.0,125.0,175.0,175.0,125.0,225.0,275.0,275.0,225.0},
                           {200.0,200.0,210.0,210.0,210.0,210.0,240.0,240.0,240.0,240.0,310.0,310.0,240.0,240.0,310.0,310.0},
                           {1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0}};
        GLfloat colors[4][3]={{0.0,0.0,1.0},{0.9961,0.9961,0.65625},{1.0,0.0,1.0},{1.0,.0,1.0}};
        static float q,w,e;
        static float fq,fw,fe;
        static GLfloat wa=0,wb=0,wc=0,ba,bb,bc;
        int flag;

        void myinit(void) {
            glClearColor(0.506,.7,1,0.0); glPointSize(2.0); glLoadIdentity();
            glOrtho(0.0,499.0,0.0,499.0,-300.0,300.0);
        }

        void draw_top_boxes(GLint i,GLint j) {
            glColor3f(1.0,0.0,0.0);
            glBegin(GL_POLYGON);
            glColor3fv(colors[j]); // to draw the boat
            glVertex2f(v1[0][i+0],v1[1][i+0]);
            glColor3fv(colors[j+1]); glVertex2f(v1[0][i+1],v1[1][i+1]);
            glColor3fv(colors[j+2]); glVertex2f(v1[0][i+2],v1[1][i+2]);
            glColor3fv(colors[j+3]); glVertex2f(v1[0][i+3],v1[1][i+3]);
            glEnd();
        }

        void draw_polygon(GLint i) {
            glBegin(GL_POLYGON); // to draw the boat
            glColor3f(0.0,0.0,0.0);
            glColor3fv(colors[0]); glVertex2f(v[0][i+0],v[1][i+0]);
            glColor3fv(colors[1]); glVertex2f(v[0][i+1],v[1][i+1]);
            glColor3fv(colors[2]); glVertex2f(v[0][i+2],v[1][i+2]);
            glColor3fv(colors[3]); glVertex2f(v[0][i+3],v[1][i+3]);
            glEnd();
        }

        void draw_boat() {
            draw_polygon(0); draw_polygon(4); draw_polygon(8); draw_polygon(12); draw_polygon(16); draw_polygon(20);
            draw_top_boxes(0,0); draw_top_boxes(4,0); draw_top_boxes(8,0); draw_top_boxes(12,0);
            glFlush(); glPopMatrix(); glPopMatrix();
        }

        void draw_water() {
            GLfloat i; GLfloat x=0,y=103,j=0; GLfloat k;
            glPushMatrix(); glTranslatef(wa,wb,wc); glPushMatrix(); glColor3f(0,0,1);
            for(k=y;k>0;k-=6) {
                for(i=1;i<30;i++) { glBegin(GL_LINES); glVertex2f(j,k); glVertex2f(j+10,k); glEnd(); j=j+20; }
                j=0;
            }
            glPopMatrix(); glPopMatrix();
        }

        void draw_fishes() {
            glPushMatrix(); glTranslatef(fq,12.0,fe); glPushMatrix();
            glColor3f(.99609375,0.2578125,0.2578125);
            glBegin(GL_TRIANGLES); glVertex2f(100,80); glVertex2f(100,60); glVertex2f(85,70); glEnd();
            glColor3f(.99609375,0.2578125,0.2578125);
            glBegin(GL_TRIANGLES); glVertex2f(100,70); glVertex2f(110,75); glVertex2f(110,65); glEnd();
            glColor3f(0,0,0); glBegin(GL_POINTS); glVertex2f(90,71); glEnd();
            glColor3f(.99609375,0.2578125,0.2578125);
            glBegin(GL_TRIANGLES); glVertex2f(200,80); glVertex2f(200,60); glVertex2f(185,70); glEnd();
            glColor3f(.99609375,0.2578125,0.2578125);
            glBegin(GL_TRIANGLES); glVertex2f(200,70); glVertex2f(210,75); glVertex2f(210,65); glEnd();
            glColor3f(0,0,0); glBegin(GL_POINTS); glVertex2f(190,71); glEnd();
            glPopMatrix(); glPopMatrix(); glFlush();
        }

        void draw_cloud() {
            GLfloat m=100,n=400,o=10;
            for(int i=0;i<7;i++) {
                glPushMatrix(); glColor3f(1.0,1.0,1.0);
                if(i==1) glTranslated(125,415,10);
                else if(i==3||i==5) glTranslated(m,n+5,o);
                else glTranslated(m,n,o);
                glutSolidSphere(20.0,5000,150); glPopMatrix(); m+=10;
            }
        }

        void draw_square() {
            glColor3f(0,0.5,0.996);
            glBegin(GL_POLYGON); glVertex2f(0,0); glVertex2f(1000,0); glVertex2f(0,300); glVertex2f(1000,300); glEnd();
            glFlush();
        }

        void draw_brotate() {
            glPushMatrix(); glColor3f(0.96,0.5,0.25); // to draw body of the bird
            glTranslated(300,400,10); glScalef(3,1,1); glutSolidSphere(6,50000,15); glPopMatrix();
            glPushMatrix(); glTranslated(323,400,10); glutSolidSphere(5,50000,15); glPopMatrix();
            glColor3f(0,0,0); glBegin(GL_POINTS); glVertex2f(325,401); glEnd();
            glColor3f(0.96,0.5,0.25); // to draw wings
            glBegin(GL_LINES); glVertex2f(294,394); glVertex2f(286,389); glEnd();
            glBegin(GL_LINES); glVertex2f(286,389); glVertex2f(295,391); glEnd();
            glBegin(GL_LINES); glVertex2f(295,391); glVertex2f(285,385); glEnd();
            glBegin(GL_LINES); glVertex2f(285,385); glVertex2f(309,395); glEnd();
            glBegin(GL_LINES); glVertex2f(294,406); glVertex2f(286,411); glEnd();
            glBegin(GL_LINES); glVertex2f(286,411); glVertex2f(295,409); glEnd();
            glBegin(GL_LINES); glVertex2f(295,409); glVertex2f(285,415); glEnd();
            glBegin(GL_LINES); glVertex2f(285,415); glVertex2f(309,406); glEnd();
            glColor3f(0.96,0.5,0.25);
        }

        void draw_bird() {
            GLfloat x=200,y=400,z=10;
            draw_brotate();
            glBegin(GL_LINES); // draw legs of the bird
            glVertex2f(285,402); glVertex2f(275,402); glEnd();
            glBegin(GL_LINES); glVertex2f(285,398); glVertex2f(275,398); glEnd();
            glBegin(GL_LINES); glVertex2f(275,402); glVertex2f(270,405); glEnd();
            glBegin(GL_LINES); glVertex2f(275,402); glVertex2f(270,398); glEnd();
            glBegin(GL_LINES); glVertex2f(275,398); glVertex2f(273,400); glEnd();
            glBegin(GL_LINES); glVertex2f(275,398); glVertex2f(270,395); glEnd();
            glBegin(GL_LINES); glVertex2f(323,405); glVertex2f(323,407); glEnd();
            glPushMatrix(); glTranslatef(323,409,10); glutSolidSphere(2,200,20); glPopMatrix();
            glBegin(GL_TRIANGLES); glVertex2f(328,400); glVertex2f(331,397); glVertex2f(327,398.5); glEnd();
            glFlush();
        }

        void drawstars() {
            glColor3f(1.0,1.0,1.0);
            glBegin(GL_POINTS);
            glVertex3f(300.0,400.0,10.0); glVertex3f(200,400.0,10.0); glVertex3f(150,450.0,10.0);
            glVertex3f(100,470.0,10.0); glVertex3f(50,450.0,10.0); glVertex3f(50,350.0,10.0);
            glVertex3f(90,365.0,10.0); glVertex3f(350,450.0,10.0); glVertex3f(275,470.0,10.0);
            glVertex3f(280,430.0,10.0); glVertex3f(250,400.0,10.0); glVertex3f(450,450.0,10.0);
            glVertex3f(430,430.0,10.0); glVertex3f(430,470.0,10.0); glVertex3f(300,450.0,10.0);
            glVertex3f(265,380.0,10.0); glVertex3f(235,450.0,10.0);
            glEnd();
        }

        void draw_all() {
            glClear(GL_COLOR_BUFFER_BIT);
            if(flag==0) {
                glDisable(GL_LIGHTING); // immp one
                draw_square(); draw_cloud(); glClearColor(0.506,.7,1,0.0);
                glTranslatef(q,w,e); glPushMatrix(); glColor3f(1.0,0.0,0.0); draw_boat(); draw_fishes();
                glPushMatrix(); glColor3f(1.0,1.0,0.0); glTranslated(400,400,10); glutSolidSphere(20.0,5000,150); glPopMatrix();
            }
            if(flag==1) {
                glDisable(GL_LIGHTING); // imp one
                draw_square(); draw_cloud(); glClearColor(0.9960,0.7070,0.3164,0.0);
                glTranslatef(q,w,e); glPushMatrix(); glColor3f(1.0,0.0,0.0); draw_boat(); draw_fishes();
                glPushMatrix(); glColor3f(1.0,1.0,0.0); glTranslated(400,400,10); glutSolidSphere(20.0,500,100); glPopMatrix();
            }
            if(flag==2) {
                // just try and change values in these arrays, specially the position array
                drawstars(); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0);
                // GLfloat emission[]={0.1,0.1,0.1,0.0};
                GLfloat diffuse[] = { 0.40, 0.40,0.40, 1.0 };
                GLfloat ambiance[] = { 0.5, 0.5,0.5, 1.0 };
                GLfloat specular[] = { 1.3, 1.3,.3, 1.0 };
                GLfloat intensity[]={500.0};
                GLfloat position[] = { 10,30,-30,1.0 };
                glLightfv(GL_LIGHT0, GL_POSITION, position);
                glLightfv(GL_LIGHT0, GL_DIFFUSE,diffuse);
                glLightfv(GL_LIGHT0, GL_AMBIENT,ambiance);
                glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER,GL_TRUE);
                glLightfv(GL_LIGHT0, GL_SPECULAR,specular);
                glLightfv(GL_LIGHT0, GL_INTENSITY,intensity);
                glColor3f(0,0.5,0.996);
                glBegin(GL_POLYGON); glVertex2f(0,0); glVertex2f(1000,0); glVertex2f(0,150); glVertex2f(1000,150); glEnd();
                glTranslatef(q,w,e); glPushMatrix(); glColor3f(1.0,0.0,0.0); draw_boat(); draw_fishes();
                glDisable(GL_LIGHTING); glDisable(GL_LIGHT0); draw_cloud(); glClearColor(0.0,0.0,0.0,0.0);
                glPushMatrix(); glColor3f(1.0,1.0,1.0); glTranslated(400,400,10); glutSolidSphere(20.0,500,100); glPopMatrix();
                glColor3f(1.0,1.0,1.0); glBegin(GL_POINTS); glVertex3f(300.0,400.0,10.0); glEnd();
            }
            glPushMatrix(); glTranslatef(ba,bb,bc); glPushMatrix(); draw_bird(); glPopMatrix(); glPopMatrix();
            GLfloat i; glPushMatrix(); GLfloat x=0,y=100,j=0; int k;
            //draw_water();
            q+=25; fq-=3.5;
            if(q>=440.0) //470
                q=-390.0; //400
            if(fq<=-300) //500
                fq=400.0; //400
            wa-=1;
            if(wa<=(-20)) wa=-0.5;
            ba+=6;
            if(ba>=500) ba=-400;
            glFlush(); glutSwapBuffers();
        }

        void display(void) { draw_all(); }

        void color_menu(int id) {
            switch(id) {
                case 1: flag=0; break;
                case 2: flag=1; break;
                case 3: flag=2; break;
                case 4: exit(0); break;
            }
            glutPostRedisplay();
        }

        void main_menu(int id) {
            switch(id) {
                case 1: break;
                case 2: exit(0); break;
                glutPostRedisplay();
            }
        }

        int main(int argc,char **argv) {
            int sub_menu;
            glutInit(&argc,argv);
            glutInitDisplayMode(GLUT_RGB|GLUT_DOUBLE);
            glutInitWindowSize(1000,1000);
            glutInitWindowPosition(0,0);
            glutCreateWindow("Ship");
            sub_menu=glutCreateMenu(color_menu);
            glutAddMenuEntry("Morning",1);
            glutAddMenuEntry("Evening",2);
            glutAddMenuEntry("Night",3);
            glutAddMenuEntry("Quit",4);
            glutCreateMenu(main_menu);
            glutAddSubMenu("View",sub_menu);
            glutAddMenuEntry("Quit",2);
            glutAttachMenu(GLUT_RIGHT_BUTTON);
            glutDisplayFunc(display);
            glutIdleFunc(display);
            myinit();
            glutMainLoop();
            glFlush();
        }

    Read the article

  • My Interview with Microsoft

    - by Victor Hurdugaci
    This post is for those who want to apply, or have already applied (but not finished the interview), for a Microsoft job. The recruitment process is quite similar for everyone and consists of a few steps:

    1. Application
    2. E-Mail Interview
    3. Phone Interview
    4. On Site Interview

    I will tell you my story and how I went through the four phases.

    1. Application

    My blog's title (Ex Nihilo Nihil Fit) means "Nothing Comes Out of Nothing". You can't get a job at Microsoft by not doing anything - this is true for anything else. The first step you need to complete is the application process. For this, many options are available. You can...

    - apply online on Microsoft's Careers website, as I did
    - send your CV to different e-mail addresses (there are some dedicated e-mails for different positions)
    - apply through some 3rd-party organization (job shop, campus recruitment, job agency, etc.)

    On MS Careers you just have to post your CV and choose the job you want. That's all! No recommendation letter, no cover letter, no nothing. Of course, not every CV passes the selection process. Here are some tips for improving your resume (they worked for me):

    - Don't write it just before applying! Write a draft version, wait a few days and then review it. This way you will find a lot of mistakes and stupid things you wrote initially. If you review it immediately after writing, your mind will not be criticism-oriented and will just ignore mistakes. Repeat the write-wait-review process as many times as necessary, until a review reveals no mistakes.
    - After you do the final review and the CV is bullet-proof, ask others to review it. They will definitely find inconsistencies and mistakes, and this will make you feel stupid. This is good, because it will open your eyes and put you into an 'I want to improve' mode. You'll try to correct everything. After you come up with a modified version, go again through steps 1 and 2. Repeat as many times as necessary. [Special thanks to Lucian Sasu, Nadia Comanici, Andrei Ciobanu, Monica Balan and Lavinia Tanase for reviewing my CV!]
    - Make it short and give only relevant facts. Initially, I came up with a 5-page CV because I wrote down every single technology I had worked with. There were a lot of irrelevant things; I wrote Windows Workflow Foundation just because I played with it for a few days. I added extensive descriptions for every project and made a personal details section (name, birth date, address, etc.) of half a page. Others suggested cutting everything that was not necessary. You don't need extensive descriptions, just a few words. For example, I wrote "VS Image Visualizer - Visual Studio 2008 debug visualizer for images" and added a link to the project's page - you submit formatted text and can embed links.
    - Add something that makes it different. I don't know if this makes a difference, but I added some lines to separate items, just like in the picture below. Microsoft definitely gets thousands of CVs per day; you need something special.
    - Don't lie! Tell exactly what you did and what the proficiency level of your skills is. For example, don't write "Advanced" for UML if you don't know the difference between composition and aggregation. Be realistic and don't under/over estimate yourself.
    - Use the spell chick. Make sure everything is written in correct English and there are no grammar/spelling mistakes. Noddy likes a WC with grammar mi takes. You mght fail just because of that.

    Once you complete your CV, choose the job that best suits your needs, apply and wait...

    The waiting is a problem, because all these big companies like Microsoft, Google, Mozilla, Apple, etc. will contact you only if they find something interesting in your application; if you're not suitable, no rejection is sent. I applied for an Intern Software Development Engineer position at Microsoft Redmond. I could not apply for a full-time position because I want to finish the master program on time, in the next summer - an internship is just what I need.

    2. E-Mail Interview

    January 20, 2010. Two months since I submitted the CV. I wasn't hoping anymore that MS would contact me, when I got an e-mail titled "Victor Hurdugaci ES DK" from Holly Peterson saying: Read more >>

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… SHIT CRAP.

    No, not fecal material. But crap code. Crap SQL. Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time.

    The challenge for me is to look back on my SQL Server career and find something that WASN'T crap. Well, there's a lot that wasn't, but for some reason I don't remember those that well. So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out. Let's see if this outline fits the bill:

    - An ETL process on text files;
    - That had to interface between SQL Server and an AS/400 system;
    - That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via xp_cmdshell;
    - That had to email reports and financial data, some of it sensitive.

    Yep, the stench smell is coming back to me now, as if it was yesterday…

    As to why SSIS and BizTalk were not options: basically I didn't know either of them well enough to get the job done (and I still don't). I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them. And see how screwed up the rest of the process was:

    - Payment files from multiple vendors in multiple formats;
    - Sent via FTP, PGP encrypted email, or some other wizardry;
    - Manually opened/downloaded and saved to a particular set of folders (couldn't change this);
    - Once processed, had to be placed BACK in the same folders with the original archived;
    - x2 divisions that had to run separately;
    - Plus an additional vendor file in another format on a completely different schedule;
    - So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible).

    I didn't feel so bad about the solution I came up with, which was naturally:

    - Copy the payment files to the local SQL Server drives, using xp_cmdshell
    - Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before Powershell)
    - Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count)
    - Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison
    - bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file
    - Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata
    - Process payment batches and log which division and vendor they belong to
    - Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare the emailed file against…which always matched, surprisingly)
    - Email another report showing unmatched payments so they could manually void them…about 3 months afterward
    - All in "Excel" format, using xp_sendmail (SQL 2000 system)
    - Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked)

    If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command. Like passionately. Like fairy-tale love. So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops. I think there was one section that had 4 or 5 nested for commands. It was wrong, disturbed, and completely un-maintainable by anyone, even myself. Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code. (for you Star Trek TOS fans)

    The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years. Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or files saved to the wrong folders. I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with. Fortunately, as far as I know it's no longer in use and someone has written a proper replacement. Today I would knuckle down and do it in SSIS or Powershell, even if it took me weeks to get it right.

    The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE. sed scripting regular expressions doesn't fit that criteria in any way. If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT. Stop and consider long-term maintainability, not just for yourself but for others on your team. If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed. And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.

    P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way. Phew! Talk about stink!

    Read the article

  • Rkhunter 122 suspect files; do I have a problem?

    - by user276166
    I am new to Ubuntu. I am using Xfce Ubuntu 14.04 LTS. I ran rkhunter a few weeks ago and only got a few warnings; the forum said that they were normal. But this time rkhunter reported 122 warnings. Please advise.
    casey@Shaman:~$ sudo rkhunter -c [ Rootkit Hunter version 1.4.0 ] Checking system commands... Performing 'strings' command checks Checking 'strings' command [ OK ] Performing 'shared libraries' checks Checking for preloading variables [ None found ] Checking for preloaded libraries [ None found ] Checking LD_LIBRARY_PATH variable [ Not found ] Performing file properties checks Checking for prerequisites [ Warning ] /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ OK ] /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ] /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ] /usr/sbin/rsyslogd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ] /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ] /usr/bin/awk [ Warning ] /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/cut [ Warning ] /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ] /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ] /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ] /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ] /usr/bin/killall [ OK ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ] /usr/bin/ldd [ Warning ] /usr/bin/less [ OK ] /usr/bin/locate [ OK ] /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ OK ] /usr/bin/mail [ OK ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ OK ] /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ] /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ OK ] /usr/bin/rkhunter [ OK ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ] /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ] /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ] /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ] /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ] /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ] /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ] /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ] /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ] /usr/bin/whereis [ Warning ] /usr/bin/which [ OK ] /usr/bin/who [ Warning ] /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/mawk [ Warning ] /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ OK ] /usr/bin/w.procps [ Warning ] /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ] /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ] /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ] /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ] /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ] /sbin/sysctl [ Warning ] /bin/bash [ Warning ] /bin/cat [ Warning ] /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ] /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ] /bin/echo [ Warning ] /bin/ed [ OK ] /bin/egrep [ Warning ] /bin/fgrep 
[ Warning ] /bin/fuser [ OK ] /bin/grep [ Warning ] /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ OK ] /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ] /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ] /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ Warning ] /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ] /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ] /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ OK ] /bin/kmod [ Warning ] /bin/dash [ Warning ] [Press <ENTER> to continue] Checking for rootkits... Performing check of known rootkit files and directories 55808 Trojan - Variant A [ Not found ] ADM Worm [ Not found ] AjaKit Rootkit [ Not found ] Adore Rootkit [ Not found ] aPa Kit [ Not found ] Apache Worm [ Not found ] Ambient (ark) Rootkit [ Not found ] Balaur Rootkit [ Not found ] BeastKit Rootkit [ Not found ] beX2 Rootkit [ Not found ] BOBKit Rootkit [ Not found ] cb Rootkit [ Not found ] CiNIK Worm (Slapper.B variant) [ Not found ] Danny-Boy's Abuse Kit [ Not found ] Devil RootKit [ Not found ] Dica-Kit Rootkit [ Not found ] Dreams Rootkit [ Not found ] Duarawkz Rootkit [ Not found ] Enye LKM [ Not found ] Flea Linux Rootkit [ Not found ] Fu Rootkit [ Not found ] Fuck`it Rootkit [ Not found ] GasKit Rootkit [ Not found ] Heroin LKM [ Not found ] HjC Kit [ Not found ] ignoKit Rootkit [ Not found ] IntoXonia-NG Rootkit [ Not found ] Irix Rootkit [ Not found ] Jynx Rootkit [ Not found ] KBeast Rootkit [ Not found ] Kitko Rootkit [ Not found ] Knark Rootkit [ Not found ] ld-linuxv.so Rootkit [ Not found ] Li0n Worm [ Not found ] Lockit / LJK2 Rootkit [ Not found ] Mood-NT Rootkit [ Not found ] MRK Rootkit [ Not found ] Ni0 Rootkit [ Not found ] Ohhara Rootkit [ Not found ] Optic Kit (Tux) Worm [ Not found ] Oz Rootkit [ Not found ] Phalanx Rootkit [ Not found ] Phalanx2 Rootkit [ Not found ] Phalanx2 Rootkit (extended tests) [ Not found ] Portacelo Rootkit [ Not found ] R3dstorm Toolkit [ Not found ] RH-Sharpe's Rootkit [ Not found ] RSHA's Rootkit [ Not found ] Scalper Worm [ Not found ] Sebek LKM [ Not found ] Shutdown Rootkit [ Not found ] SHV4 Rootkit [ Not found ] SHV5 Rootkit [ Not found ] Sin Rootkit [ Not found ] Slapper Worm [ Not found ] Sneakin Rootkit [ Not found ] 'Spanish' Rootkit [ Not found ] Suckit Rootkit [ Not found ] Superkit Rootkit [ Not found ] TBD (Telnet BackDoor) [ Not found ] TeLeKiT Rootkit [ Not found ] T0rn Rootkit [ Not found ] trNkit Rootkit [ Not found ] Trojanit Kit [ Not found ] Tuxtendo Rootkit [ Not found ] URK Rootkit [ Not found ] Vampire Rootkit [ Not found ] VcKit Rootkit [ Not found ] Volc Rootkit [ Not found ] Xzibit Rootkit [ Not found ] zaRwT.KiT Rootkit [ Not found ] ZK Rootkit [ Not found ] [Press <ENTER> to continue] Performing additional rootkit checks Suckit Rookit additional checks [ OK ] Checking for possible rootkit files and directories [ None found ] Checking for possible rootkit strings [ None found ] Performing malware checks Checking running processes for suspicious files [ None found ] Checking for login backdoors [ None found ] Checking for suspicious directories [ None found ] Checking for sniffer log files [ None found ] Performing Linux specific checks Checking loaded kernel modules [ OK ] Checking kernel module names [ OK ] [Press <ENTER> to continue] Checking the network... 
Performing checks on the network ports Checking for backdoor ports [ None found ] Checking for hidden ports [ Skipped ] Performing checks on the network interfaces Checking for promiscuous interfaces [ None found ] Checking the local host... Performing system boot checks Checking for local host name [ Found ] Checking for system startup files [ Found ] Checking system startup files for malware [ None found ] Performing group and account checks Checking for passwd file [ Found ] Checking for root equivalent (UID 0) accounts [ None found ] Checking for passwordless accounts [ None found ] Checking for passwd file changes [ Warning ] Checking for group file changes [ Warning ] Checking root account shell history files [ None found ] Performing system configuration file checks Checking for SSH configuration file [ Not found ] Checking for running syslog daemon [ Found ] Checking for syslog configuration file [ Found ] Checking if syslog remote logging is allowed [ Not allowed ] Performing filesystem checks Checking /dev for suspicious file types [ Warning ] Checking for hidden files and directories [ Warning ] [Press <ENTER> to continue] System checks summary ===================== File properties checks... Required commands check failed Files checked: 137 Suspect files: 122 Rootkit checks... Rootkits checked : 291 Possible rootkits: 0 Applications checks... All checks skipped The system checks took: 5 minutes and 11 seconds All results have been written to the log file (/var/log/rkhunter.log)
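    Note that all 122 suspect files come from the "file properties" check, which compares each binary against a stored baseline; routine package updates change those binaries and trip the check. A hedged sketch of the usual follow-up, assuming the system is otherwise trusted (these are standard rkhunter options):
    # see what each warning actually was
    sudo grep -i warning /var/log/rkhunter.log | less
    # refresh rkhunter's data files, then rebuild the file-properties baseline
    sudo rkhunter --update
    sudo rkhunter --propupd
    # re-run the check (--sk skips the keypress prompts); the properties
    # warnings should be gone if the binaries were legitimately updated
    sudo rkhunter -c --sk
    If the warnings come back without a package update to explain them, that is the point to start worrying.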

    Read the article

  • Nodemanager Init.d Script

    - by john.graves(at)oracle.com
    I’ve seen many of these floating around.  This is my favourite on an Ubuntu-based machine. Just throw it into the /etc/init.d directory and update the following lines:
    export MW_HOME=/opt/app/wls10.3.4
    user='weblogic'
    Then run:
    update-rc.d nodemanager defaults
    Everything else should be OK for 10.3.4.
    #!/bin/sh
    #
    ### BEGIN INIT INFO
    # Provides: nodemanager
    # Required-Start:
    # Required-Stop:
    # Default-Start: 2 3 4 5
    # Default-Stop: 0 1 6
    # Short-Description: WebLogic Nodemanager
    ### END INIT INFO
    # nodemgr Oracle Weblogic NodeManager service
    #
    # chkconfig: 345 85 15
    # description: Oracle Weblogic NodeManager service
    #
    ### BEGIN INIT INFO
    # Provides: nodemgr
    # Required-Start: $network $local_fs
    # Required-Stop:
    # Should-Start:
    # Should-Stop:
    # Default-Start: 3 4 5
    # Default-Stop: 0 1 2 6
    # Short-Description: Oracle Weblogic NodeManager service.
    # Description: Starts and stops Oracle Weblogic NodeManager.
    ### END INIT INFO

    # Source function library.
    . /lib/lsb/init-functions

    # set Weblogic environment defining CLASSPATH and LD_LIBRARY_PATH
    # to start/stop various components.
    export MW_HOME=/opt/app/wls10.3.4
    #
    # Note:
    # The setWLSEnv.sh not only does a good job of setting the environment,
    # but also advertises the fact explicitly in the console! Silence it.
    #
    . $MW_HOME/wlserver_10.3/server/bin/setWLSEnv.sh > /dev/null

    # set NodeManager environment
    export NodeManagerHome=$WL_HOME/common/nodemanager
    NodeManagerLockFile=$NodeManagerHome/nodemanager.log.lck

    # check JAVA_HOME
    if [ -z ${JAVA_HOME:-} ]; then
      export JAVA_HOME=/opt/sun/products/java/jdk1.6.0_18
    fi

    exec=$MW_HOME/wlserver_10.3/server/bin/startNodeManager.sh
    prog='nodemanager'
    user='weblogic'

    is_nodemgr_running() {
      local nodemgr_cnt=`ps -ef | \
        grep -i 'java ' | \
        grep -i ' weblogic.NodeManager ' | \
        grep -v grep | \
        wc -l`
      echo $nodemgr_cnt
    }

    get_nodemgr_pid() {
      nodemgr_pid=0
      if [ `is_nodemgr_running` -eq 1 ]; then
        nodemgr_pid=`ps -ef | \
          grep -i 'java ' | \
          grep -i ' weblogic.NodeManager ' | \
          grep -v grep | \
          tr -s ' ' | \
          cut -d' ' -f2`
      fi
      echo $nodemgr_pid
    }

    check_nodemgr_status () {
      local retval=0
      local nodemgr_cnt=`is_nodemgr_running`
      if [ $nodemgr_cnt -eq 0 ]; then
        if [ -f $NodeManagerLockFile ]; then
          retval=2
        else
          retval=3
        fi
      elif [ $nodemgr_cnt -gt 1 ]; then
        retval=4
      else
        retval=0
      fi
      echo $retval
    }

    start() {
      ulimit -n 65535
      [ -x $exec ] || exit 5
      echo -n $"Starting $prog: "
      su $user -c "$exec &"
      retval=$?
      echo
      return $retval
    }

    stop() {
      echo -n $"Stopping $prog: "
      kill -s 9 `get_nodemgr_pid` &> /dev/null
      retval=$?
      echo
      [ $retval -eq 0 ] && rm -f $NodeManagerLockFile
      return $retval
    }

    restart() {
      stop
      start
    }

    reload() {
      restart
    }

    force_reload() {
      restart
    }

    rh_status() {
      local retval=`check_nodemgr_status`
      if [ $retval -eq 0 ]; then
        echo "$prog (pid:`get_nodemgr_pid`) is running..."
      elif [ $retval -eq 4 ]; then
        echo "Multiple instances of $prog are running..."
      else
        echo "$prog is stopped"
      fi
      return $retval
    }

    rh_status_q() {
      rh_status >/dev/null 2>&1
    }

    case "$1" in
      start)
        rh_status_q && exit 0
        $1
        ;;
      stop)
        rh_status_q || exit 0
        $1
        ;;
      restart)
        $1
        ;;
      reload)
        rh_status_q || exit 7
        $1
        ;;
      force-reload)
        force_reload
        ;;
      status)
        rh_status
        ;;
      condrestart|try-restart)
        rh_status_q || exit 0
        restart
        ;;
      *)
        echo -n "Usage: $0 {"
        echo -n "start|"
        echo -n "stop|"
        echo -n "status|"
        echo -n "restart|"
        echo -n "condrestart|"
        echo -n "try-restart|"
        echo -n "reload|"
        echo -n "force-reload"
        echo "}"
        exit 2
    esac
    exit $?
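    For reference, a minimal install sketch, assuming the script above is saved as a file named nodemanager in the current directory (the service name and user come from the post; the rest is standard Ubuntu service handling):
    # copy the script into place and make it executable
    sudo cp nodemanager /etc/init.d/nodemanager
    sudo chmod 755 /etc/init.d/nodemanager
    # register it for the default runlevels
    sudo update-rc.d nodemanager defaults
    # start it and check that it came up
    sudo service nodemanager start
    sudo service nodemanager status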

    Read the article

  • XML: Process large data

    - by Atmocreations
    Hello. What XML parser do you recommend for the following purpose? The XML file (formatted, containing whitespace) is around 800 MB. It mostly contains three types of tags (let's call them n, w and r). They have an attribute called id which I'd have to search for, as fast as possible. Removing attributes I don't need could save around 30%, maybe a bit more. First part, preparing for the second part: is there any good tool (command line, Linux and Windows if possible) to easily remove unused attributes from certain tags? I know that XSLT could be used. Or are there any easy alternatives? Also, I could split it into three files, one for each tag, to gain speed for later parsing... Speed is not too important for this preparation of the data; of course it would be nice if it took minutes rather than hours. Second part: once I have the data prepared, shortened or not, I should be able to search for the id attribute I mentioned, and this is time-critical. Estimates using wc -l tell me that there are around 3M n-tags and around 418K w-tags. The latter can contain up to approximately 20 subtags each. W-tags also contain some, but those would be stripped away. "All I have to do" is navigate between tags containing certain id attributes. Some tags have references to other ids, therefore giving me a tree, maybe even a graph. The original data is big (as mentioned), but the result set shouldn't be too big, as I only have to pick out certain elements. Now the question: what XML parsing library should I use for this kind of processing? I would use Java 6 in the first instance, with porting it to BlackBerry in mind. Might it be useful to just create a flat file indexing the ids and pointing to offsets in the file? Are the optimizations mentioned in the first part even necessary? Or are there parsers known to be just as fast on the original data? A little note: to test, I took the id on the very last line of the file and searched for it using grep. This took around a minute on a Core 2 Duo. What happens if the file grows even bigger, say 5 GB? I appreciate any notice or recommendation. Thank you all very much in advance, and regards
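    For the attribute-stripping step, a command-line sketch; this assumes xmlstarlet is installed, and the tag/attribute names (w, lang, n, the id value) are illustrative stand-ins, not from the original file:
    # drop an unneeded attribute from every <w> element (XPath is illustrative)
    xmlstarlet ed -d '//w/@lang' input.xml > stripped.xml
    # pull out one element by its id attribute, e.g. for spot checks
    xmlstarlet sel -t -c "//n[@id='12345']" stripped.xml
    One caveat: xmlstarlet ed builds the whole document in memory, so for an 800 MB file a streaming pass (sed on well-formatted input, or a streaming XSLT processor) may be kinder to RAM.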

    Read the article

  • WCF web service: response is 200/ok, but response body is empty

    - by user1021224
    I am creating a WCF Web API service. My problem is that some methods return a 200/OK response, but the headers and the body are empty. In setting up my web service, I created an ADO.NET Entity Data Model and chose the ADO.NET DbContext Generator when I added a code generation item. In the Model.tt document, I changed HashSet and ICollection to List. I built my website. It used to be that when I coded a method to return a List of an entity (like List<Customer> or List<Employee> in the Northwind database), it worked fine. Over time, I could not return a List of any of those, and could only grab one entity. Now it's gotten to the point where I can return a List<string> or List<int>, but not a List or an instance of any entity. When I try to get a List<AnyEntity>, the response is 200/OK, but the response headers and body are empty. I have tried using the debugger and Firefox's Web Console. Using FF's WC, I could only get an "undefined" status code. I am not sure where to go from here. EDIT: In trying to grab all Areas from the database, I do this:
    [WebGet(UriTemplate = "areas")]
    public List<a1Areas> AllAreas()
    {
        return context.a1Areas.ToList();
    }
    I would appreciate any more methods for debugging this. Thanks in advance. Found the answer, thanks to Merlyn! My Global.asax file was missing two lines that take care of proxies and disposing of my context object (Entity Framework's dynamic proxy types generally can't be serialized, so disabling proxy creation lets the entities go out over the wire). The code is below:
    void Application_BeginRequest(object sender, EventArgs e)
    {
        var context = new AssignmentEntities();
        context.Configuration.ProxyCreationEnabled = false;
        HttpContext.Current.Items["_context"] = context;
    }
    void Application_EndRequest(object sender, EventArgs e)
    {
        var context = HttpContext.Current.Items["_context"] as AssignmentEntities;
        if (context != null)
        {
            context.Dispose();
        }
    }
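    When a browser console only reports "undefined", it can help to hit the endpoint from the command line and look at the raw response; a quick sketch (the host, port, and route are placeholders matching the UriTemplate above):
    # show status line, headers and body for the areas resource
    curl -i http://localhost:12345/api/areas
    # ask for JSON explicitly, in case content negotiation is the problem
    curl -i -H "Accept: application/json" http://localhost:12345/api/areas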

    Read the article

  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation but I cannot manage to make it work. Here's a sample (the stderr redirects below were mangled in transit; they are meant to be >&2):
    #!/bin/bash
    TRUE=1
    FALSE=0
    ############### Added for testing
    log4bash_LOG_ENABLED=$TRUE
    log4bash_rootLogger=$TRACE,f,s
    log4bash_appender_f=file
    log4bash_appender_f_dir=$(pwd)
    log4bash_appender_f_file=test.log
    log4bash_appender_f_roll_format=%Y%m
    log4bash_appender_f_roll=$TRUE
    log4bash_appender_f_maxBackupIndex=10
    ####################################
    log4bash_abs(){
      if [ "${1:0:1}" == "." ]; then
        builtin echo ${rootDir}/${1}
      else
        builtin echo ${1}
      fi
    }
    log4bash_check_app_dir(){
      if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then
        dir=$(log4bash_abs $1)
        if [ ! -d ${dir} ]; then
          #log a seperation line
          mkdir $dir
        fi
      fi
    }
    # Delete old log files
    # $1 Log directory
    # $2 Log filename
    # $3 Log filename suffix
    # $4 Max backup index
    log4bash_delete_old_files(){
      ##### Added for testing
      builtin echo "Running log4bash_delete_old_files $@" >&2
      #####
      if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then
        local directory=$(log4bash_abs $1)
        local filename=$2
        local maxBackupIndex=$4
        local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g')
        local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt)
        local fileCnt=$(builtin echo -e "${logFileList}" | wc -l)
        local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex))
        local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g'))
        ##### Added for testing
        builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" >&2
        #####
        if [ ${fileToDeleteCnt} -gt 0 ]; then
          for f in "${fileToDelete[@]}"; do
            #### Added for testing
            builtin echo "Removing file ${f}" >&2
            ####
            builtin eval rm -f ${f}
          done
        fi
      fi
    }
    #Appender
    # $1 Log directory
    # $2 Log file
    # $3 Log file roll ?
    # $4 Appender Name
    log4bash_filename(){
      builtin echo "Running log4bash_filename $@" >&2
      local format
      local filename
      log4bash_check_app_dir "${1}"
      if [ ${3} -eq 1 ];then
        local formatProp=${4}_roll_format
        format=${!formatProp}
        if [ -z ${format} ]; then
          format=$log4bash_appender_file_format
        fi
        local suffix=.`date "+${format}"`
        filename=${1}/${2}${suffix}
        # Old log files deletion
        local previousFilenameVar=int_${4}_file_previous
        local maxBackupIndexVar=${4}_maxBackupIndex
        if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then
          builtin eval export $previousFilenameVar=$filename
          log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}"
        else
          builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}"
        fi
      else
        filename=${1}/${2}
      fi
      builtin echo $filename
    }
    ######################## Added for testing
    filename_caller(){
      builtin echo "filename_caller Call $1"
      output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" ))
      builtin echo ${output}
    }
    #### Previous logs generation
    for i in {1101..1120}; do
      file="${log4bash_appender_f_file}.2012${i:2:3}"
      builtin echo "${file} $i"
      touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file
    done
    for i in {1..4}; do
      filename_caller $i
    done
    I expect the log4bash_filename function to step into the following if only when the calculated log filename differs from the previous one:
    if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then
    For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but that is not the case, so log4bash_filename steps into this if every time, which is really not necessary... It looks like the issue is due to the following line not working properly:
    builtin eval export $previousFilenameVar=$filename
    I have some theories to explain why: in the original code, functions are declared and exported as readonly, which makes them unable to modify global variables. I removed the readonly declarations in the sample above, but the problem persists. Function calls are performed inside $(), which should make them run in separate subshells, so the modified variables are not exported back to the main shell. But I cannot manage to find a workaround to this issue... Any help is appreciated, thanks in advance!
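    The second theory is the right track: command substitution runs the function in a subshell, so any export it performs dies with that subshell. A minimal sketch demonstrating the effect (the variable and function names here are made up for illustration):
    #!/bin/bash
    set_marker() {
      export MARKER=changed   # runs in a subshell when called via $(...)
      echo "some output"
    }
    MARKER=original
    out=$(set_marker)        # command substitution: the export is lost on return
    echo "$MARKER"           # prints: original
    set_marker > /dev/null   # plain call in the current shell, no subshell
    echo "$MARKER"           # prints: changed
    A common workaround is to return the extra state through the function's stdout alongside the filename, or to write it to a temp file that the caller reads back.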

    Read the article

  • svnstat script

    - by Kyle Hodgson
    So I'm building out a shell script to check out all of our relevant svn repositories for analysis in svnstat. I've gotten all of this to work manually; now I'm writing up a bash script in Cygwin on my Vista laptop, as I intend to move this to a Linux server at some point. Edit: I gave up on this and wrote a simple .bat script. I'll figure out the Linux deployment some other way. Edit: added the sleep 30 and svn log commands. I can tell now, with the svn log command, that it's not getting to the svn log ... this time, it did Applications, ran the log, then checked out Database, and froze. I'll put the sleep 30 before and after the log this time. co2.sh:
    #!/bin/bash
    function checkout {
      mkdir $1
      svn checkout svn://dev-server/$1 $1
      svn log --verbose --xml >> svn.log $1
      sleep 30
    }
    cd /cygdrive/c/Users/My\ User/Documents/Repos/wc
    checkout Applications
    checkout Database
    checkout WebServer/www.mysite.com
    checkout WebServer/anotherhost.mysite.com
    checkout WebServer/AnotherApp
    checkout WebServer/thirdhost.mysite.com
    checkout WebServer/fourthhost.mysite.com
    checkout WebServer/WebServices
    It works, for the most part, but for some reason it has a tendency to stop working after a few repositories, usually right after finishing one repository and before going to the next. When it fails, it will not recover on its own. I've tried commenting out the svn line; it goes in and creates all the directories just fine when I do that, so it's not that. I'm looking for direction as well as direct advice. Cygwin has been very stable for me, but I did start using the native rxvt instead of "bash in a cmd.exe window" recently. I don't think that's the problem, as I've left top running on remote systems all night and rxvt didn't seem to mind. Also, I haven't done any bash scripting in Cygwin before, so I suppose this might not be recommended; though I can't see why not. I don't want all of WebServer, hence me only checking out certain folders like that. What I suspect is that something is hanging up the svn checkout. Any ideas here? Edit: this time when I hit ctrl+z to cancel out, I forgot I was on Windows and typed ps to see if the job was still running; and as you can see there are lots of svn processes hanging around... strange.
    Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos
    $ jobs
    [1]-  Stopped  bash co2.sh
    [2]+  Stopped  ./co2.sh
    Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos
    $ kill %1
    [1]-  Stopped  bash co2.sh
    Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos
    $
    [1]-  Terminated  bash co2.sh
    Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos
    $ ps
         PID   PPID   PGID  WINPID  TTY  UID    STIME COMMAND
        7872      1   7872    2340    0 1000   Jun 29 /usr/bin/svn
        7752      1   6140    7828    1 1000   Jun 29 /usr/bin/svn
        6192      1   5044    2192    1 1000   Jun 30 /usr/bin/svn
        7292      1   7452    1796    1 1000   Jun 30 /usr/bin/svn
        6236      1   7304    7468    2 1000    Jul 2 /usr/bin/svn
        1564      1   5032    7144    2 1000    Jul 2 /usr/bin/svn
        9072      1   3960    6276    3 1000    Jul 3 /usr/bin/svn
        5876      1   5876    5876  con 1000 11:22:10 /usr/bin/rxvt
         924   5876    924   10192    4 1000 11:22:10 /usr/bin/bash
        7212      1   7332    5584    4 1000 13:17:54 /usr/bin/svn
        9412      1   5480    8840    4 1000 15:38:16 /usr/bin/svn
    S   8128    924   8128    9452    4 1000 17:38:05 /usr/bin/bash
        9132   8128   8128    8172    4 1000 17:43:25 /usr/bin/svn
        3512      1   3512    3512  con 1000 17:43:50 /usr/bin/rxvt
    I  10200   3512  10200    6616    5 1000 17:43:51 /usr/bin/bash
        9732      1   9732    9732  con 1000 17:45:55 /usr/bin/rxvt
        3148   9732   3148    8976    6 1000 17:45:55 /usr/bin/bash
        5856   3148   5856     876    6 1000 17:51:00 /usr/bin/vim
        7736    924   7736    8036    4 1000 17:53:26 /usr/bin/ps
    Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos
    $ jobs
    [2]+  Stopped  ./co2.sh
    Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos
    $
    Here's an strace on the PID of the hung svn program; it's been like this for hours. Looks like it's just doing nothing. I keep suspecting that some interruption on the server is causing this; does svn have a locking mechanism I'm not aware of?
    Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos
    $ strace -p 7304
    **********************************************
    Program name: C:\cygwin\bin\svn.exe (pid 7304, ppid 6408)
    App version:  1005.25, api: 0.156
    DLL version:  1005.25, api: 0.156
    DLL build:    2008-06-12 19:34
    OS version:   Windows NT-6.0
    Heap size:    402653184
    Date/Time:    2009-07-06 18:20:11
    **********************************************
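    For the hang itself, a hedged debugging sketch: wrapping each checkout in coreutils' timeout and forcing svn into non-interactive mode often shows whether the client is stuck waiting on a prompt or on the network (this assumes your Cygwin install ships the timeout utility; the 600-second limit is an arbitrary illustration):
    function checkout {
      mkdir -p "$1"
      # --non-interactive makes svn fail instead of silently waiting on a prompt
      timeout 600 svn checkout --non-interactive "svn://dev-server/$1" "$1" \
        || echo "checkout of $1 timed out or failed" >> co2.err
    }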

    Read the article

  • Postfix logs missing information on delivery status (postfix/smtp message)

    - by hegemon
    I noticed a problem with my Postfix logs: information on the delivery status of some emails is missing. The issue affects about 1% of emails. A "healthy" log:
    <server># grep 8EB992EFBB44 postfix_log/mail04.log
    Jun 5 03:09:29 mail04 postfix/smtpd[8537]: 8EB992EFBB44: client=xxx.xxx.xxx[xxx.xxx.xxx.xxx]
    Jun 5 03:09:29 mail04 postfix/cleanup[34349]: 8EB992EFBB44: message-id=<[email protected]>
    Jun 5 03:12:02 mail04 postfix/qmgr[76377]: 8EB992EFBB44: from=<[email protected]>, size=48845, nrcpt=1 (queue active)
    Jun 5 03:15:12 mail04 postfix/smtp[35058]: 8EB992EFBB44: to=<[email protected]>, relay=mx.baz.com[xxx.xxx.xxx.xxx]:25, conn_use=70, delay=343, delays=153/190/0/0.24, dsn=2.0.0, status=sent (250 ok)
    Jun 5 03:15:12 mail04 postfix/qmgr[76377]: 8EB992EFBB44: removed
    A "broken" log:
    <server># grep F3C362EF37CA postfix_log/mail04.log
    Jun 5 04:03:27 mail04 postfix/smtpd[39666]: F3C362EF37CA: client=xxx.xxx.xxx[xxx.xxx.xxx.xxx]
    Jun 5 04:03:27 mail04 postfix/cleanup[41287]: F3C362EF37CA: message-id=<[email protected]>
    Jun 5 04:03:28 mail04 postfix/qmgr[76377]: F3C362EF37CA: from=<[email protected]>, size=48892, nrcpt=1 (queue active)
    ** here should be a log line from postfix/smtp but there is none **
    Jun 5 04:03:29 mail04 postfix/qmgr[76377]: F3C362EF37CA: removed
    Background information:
    system: FreeBSD xxx.xxx.xxx 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Thu Feb 17 02:41:51 UTC 2011 [email protected]:/usr/obj/usr/src/sys/GENERIC amd64
    Postfix is installed inside a jail. Logs are on the same machine; the log dir is mounted through nullfs. The site has spikes of heavy load, causing the (local) disks to run at 100%.
    Update: The log is rotated daily; the current size is ~500MB. I made a test by queuing 99000 messages to the same destination (in order to rule out DNS/network/MX issues). 5715 messages don't have any DSN record. The failed messages' queue times are spread evenly over the day; I don't see any time-bound issues. Some undelivered emails:
     envelopeid   | processed_time
    --------------+----------------------------
     8D7652EF3BAE | 2012-06-06 13:19:11.072715
     DD53A2EF3C5C | 2012-06-06 13:33:24.374783
     8C52F2EF4E3F | 2012-06-06 13:39:15.810616
     BBC572EF525C | 2012-06-06 13:44:22.762812
     E95822EF54D1 | 2012-06-06 13:52:01.134533
     839DD2EF4FBB | 2012-06-06 14:13:48.511236
     017EE2EF6234 | 2012-06-06 15:04:48.618963
    Those are a few picks; such records of undelivered email occur almost every second.
    <server># egrep '(8D7652EF3BAE|BBC572EF525C|017EE2EF6234)' mail04.log
    Jun 6 13:19:10 mail04 postfix/smtpd[20350]: 8D7652EF3BAE: client=xxx.xxx.xxx[xxx.xxx.xxx.xxx]
    Jun 6 13:19:10 mail04 postfix/cleanup[21024]: 8D7652EF3BAE: message-id=<[email protected]>
    Jun 6 13:19:10 mail04 postfix/qmgr[7939]: 8D7652EF3BAE: from=<[email protected]>, size=63718, nrcpt=1 (queue active)
    Jun 6 13:19:11 mail04 postfix/qmgr[7939]: 8D7652EF3BAE: removed
    Jun 6 13:44:22 mail04 postfix/smtpd[20346]: BBC572EF525C: client=xxx.xxx.xxx[xxx.xxx.xxx.xxx]
    Jun 6 13:44:22 mail04 postfix/cleanup[24811]: BBC572EF525C: message-id=<[email protected]>
    Jun 6 13:44:22 mail04 postfix/qmgr[7939]: BBC572EF525C: from=<[email protected]>, size=63758, nrcpt=1 (queue active)
    Jun 6 15:04:49 mail04 postfix/smtpd[20344]: 017EE2EF6234: client=xxx.xxx.xxx[xxx.xxx.xxx.xxx]
    Jun 6 15:04:49 mail04 postfix/cleanup[35585]: 017EE2EF6234: message-id=<[email protected]>
    Jun 6 15:04:49 mail04 postfix/qmgr[7939]: 017EE2EF6234: from=<[email protected]>, size=63706, nrcpt=1 (queue active)
    <server>#
    <server># find /var/spool/postfix/active/ -type f -print | wc -l
    1
    <server>#
    IMPORTANT: As you can see above, some of the emails don't even have the removed line.
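    A sketch of one way to pull out the affected queue IDs automatically, by comparing which IDs qmgr activated against which ones got a postfix/smtp status line (the log path matches the post; the field positions assume standard syslog layout and are illustrative):
    #!/bin/sh
    LOG=postfix_log/mail04.log
    # queue IDs that entered the active queue
    grep 'postfix/qmgr' "$LOG" | grep 'queue active' \
      | awk '{print $6}' | tr -d ':' | sort -u > active.ids
    # queue IDs that produced a delivery status line
    grep 'postfix/smtp\[' "$LOG" | grep 'status=' \
      | awk '{print $6}' | tr -d ':' | sort -u > delivered.ids
    # IDs present in the first list but not the second
    comm -23 active.ids delivered.ids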

    Read the article

< Previous Page | 5 6 7 8 9 10  | Next Page >