Search Results

Search found 27870 results on 1115 pages for 'standard output'.

Page 656/1115

  • Is there any way to kill a zombie process without reboot?

    - by Pedram
    Is there any way to kill a zombie process without rebooting? Here is how it happens: I wanted to download a 12 GB torrent. After adding the .torrent file, Transmission turned into a zombie process. I tried KTorrent too, with the same behavior. Finally I could download the file using µTorrent, but after closing the program it turns into a zombie as well. I tried kill, skill and pkill with different options and the -9 signal, but with no success. In some answers on the web I found that killing the parent can kill the zombie, but killing wine didn't help either. Is there another way?
    Edit: output of ps -o pid,ppid,stat,comm:
          PID  PPID STAT COMMAND
         7121  2692 Ss   bash
         7317  7121 R+   ps
    pstree output:
        init---GoogleTalkPlugi---4*[{GoogleTalkPlug}] +-NetworkManager---dhclient ¦ +-{NetworkManager} +-acpid +-apache2---5*[apache2] +-atd +-avahi-daemon---avahi-daemon +-bonobo-activati---{bonobo-activat} +-clock-applet +-console-kit-dae---63*[{console-kit-da}] +-cron +-cupsd +-2*[dbus-daemon] +-2*[dbus-launch] +-desktopcouch-se---desktopcouch-se +-explorer.exe +-firefox---run-mozilla.sh---firefox-bin---plugin-containe---8*[{plugin-contain}] ¦ +-14*[{firefox-bin}] +-gconfd-2 +-gdm-binary---gdm-simple-slav---Xorg ¦ ¦ +-gdm-session-wor---gnome-session---bluetooth-apple ¦ ¦ ¦ ¦ +-fusion-icon---compiz---sh---gtk-window-deco ¦ ¦ ¦ ¦ +-gdu-notificatio ¦ ¦ ¦ ¦ +-gnome-panel ¦ ¦ ¦ ¦ +-gnome-power-man ¦ ¦ ¦ ¦ +-gpg-agent ¦ ¦ ¦ ¦ +-nautilus---bash ¦ ¦ ¦ ¦ ¦ +-{nautilus} ¦ ¦ ¦ ¦ +-nm-applet ¦ ¦ ¦ ¦ +-polkit-gnome-au ¦ ¦ ¦ ¦ +-2*[python] ¦ ¦ ¦ ¦ +-qstardict---{qstardict} ¦ ¦ ¦ ¦ +-ssh-agent ¦ ¦ ¦ ¦ +-tracker-applet ¦ ¦ ¦ ¦ +-trackerd ¦ ¦ ¦ ¦ +-wakoopa---wakoopa ¦ ¦ ¦ ¦ ¦ +-3*[{wakoopa}] ¦ ¦ ¦ ¦ +-{gnome-session} ¦ ¦ ¦ +-{gdm-session-wo} ¦ ¦ +-{gdm-simple-sla} ¦ +-{gdm-binary} +-6*[getty] +-gnome-keyring-d---2*[{gnome-keyring-}] +-gnome-screensav +-gnome-settings- +-gnome-system-mo---{gnome-system-m} +-gnome-terminal---bash---ssh ¦ +-bash---pstree ¦ +-gnome-pty-helpe ¦ +-{gnome-terminal} +-gvfs-afc-volume---{gvfs-afc-volum} +-gvfs-fuse-daemo---3*[{gvfs-fuse-daem}] +-gvfs-gdu-volume +-gvfsd +-gvfsd-burn +-gvfsd-http +-gvfsd-metadata +-gvfsd-trash +-hald---hald-runner---hald-addon-acpi ¦ ¦ +-hald-addon-cpuf ¦ ¦ +-hald-addon-inpu ¦ ¦ +-hald-addon-stor ¦ +-{hald} +-hotot---xdg-open ¦ +-3*[{hotot}] +-indicator-apple +-indicator-me-se +-indicator-sessi +-irqbalance +-kded4 +-kdeinit4---kio_http_cache_ ¦ +-klauncher +-kglobalaccel +-knotify4 +-modem-manager +-multiload-apple +-mysqld---10*[{mysqld}] +-named---10*[{named}] +-nmbd +-notification-ar +-notify-osd +-pidgin---{pidgin} +-polkitd +-pulseaudio---gconf-helper ¦ +-2*[{pulseaudio}] +-rsyslogd---2*[{rsyslogd}] +-rtkit-daemon---2*[{rtkit-daemon}] +-services.exe---plugplay.exe---2*[{plugplay.exe}] ¦ +-winedevice.exe---3*[{winedevice.exe}] ¦ +-3*[{services.exe}] +-smbd---smbd +-snmpd +-sshd +-timidity +-trashapplet +-udevd---2*[udevd] +-udisks-daemon---udisks-daemon ¦ +-{udisks-daemon} +-upowerd +-upstart-udev-br +-utorrent.exe---8*[winemenubuilder] ¦ +-{utorrent.exe} +-vnstatd +-winbindd---2*[winbindd] +-2*[winemenubuilder] +-wineserver +-wnck-applet +-wpa_supplicant +-xinetd
    System monitor and top screenshots show that the zombie process is still using resources.
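    A minimal shell sketch of the advice mentioned above (find the zombie's parent and signal it so it reaps the child); the zombie PID used below is a placeholder, not taken from the question:

        # List zombie processes (STAT begins with Z) together with their parent PIDs.
        ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'

        # For a given zombie PID (1234 is a placeholder), signal its parent so it
        # collects the child's exit status; only then does the zombie entry disappear.
        parent=$(ps -o ppid= -p 1234)
        kill -s SIGCHLD "$parent"   # politely ask the parent to reap the child
        kill "$parent"              # if the zombie remains, terminate the parent itself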

    Read the article

  • 503 service unavailable when debugging PHP script in Zend Studio

    - by user25932
    I have a web server with Apache 2.0 installed. It comes with the Zend Server install pack. When I try to debug my PHP files, Apache serves a blank page with 503 Service Unavailable. Of course slow server-side code is tying up Apache requests for far too long, but I need it to wait until my debugging comes to an end. When I call the page from a browser it launches Zend Studio to debug my PHP script (the request is redirected by the Zend Debugger module). I step through my script, and if I finish debugging within 120 seconds I return to the browser normally. When it takes more than 120 seconds, the browser displays '503 Service Unavailable' and I can't get back to the page output. I have even forced 'max_execution_time = 300' and 'max_input_time = 600' in php.ini and 'TimeOut = 500' in httpd.conf. It does not matter whether it is Opera, IE or Firefox. I have spent two days googling it with no right answer so far.

    Read the article

  • Script to list current user's mapped network drives

    - by Dmart
    I have a Windows XP / Server 2003 environment where users have mapped different network drives themselves using arbitrary drive letters. Some of these users do not know the true UNC paths of these drives, and I would like to run a script or program that queries those drives and shows me the drive letters and the corresponding UNC paths. I would like to see output like "net use" in that user's context, so that I can see which drives THEY have mapped. I would need to do this using my own admin account, which is where the difficulty lies. I understand this information is stored in the HKCU registry? I would love to be able to do this in PowerShell, but a VBScript or even a standalone executable would do. Thanks.

    Read the article

  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit: It was my .bashrc file. I've copied the same profile from machine to machine, and I used special characters in my $PS1 that were somehow throwing it off. I'm now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! ...End Edit...
    Something I have noticed in Ubuntu for a long time that has been frustrating to me: when I am typing a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line it goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the command itself, but visually it overwrites the text that was displayed.) It's hard to explain without seeing it, but let's say my terminal was 20 characters wide (mine is more like 120 characters, but for the sake of an example) and I want to echo the English alphabet. What I type is this:
        echo abcdefghijklmnopqrstuvwxyz
    But what my terminal looks like before I hit the Enter key is:
        pqrstuvwxyzghijklmno
    When I hit Enter, it echoes abcdefghijklmnopqrstuvwxyz, so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect to happen, if I typed this command on a terminal that was only 20 characters wide, would be this:
        echo abcdefghijklmno
        pqrstuvwxyz
    Background: I am using bash as my shell, and I have this line in my ~/.bashrc to be able to navigate the command line with VI commands:
        set -o vi
    I am currently using Ubuntu 10.10 server and connecting to the server with PuTTY. In any other environment I have worked in, if I type a long command line, it will add a new line underneath the line I am working on when my command gets longer than the terminal width, and as I keep typing I can see my command on 2 different lines. But for as long as I can remember using Ubuntu, my long commands only occupy 1 line. This also happens when I am going back to previous commands in the history (I hit Esc, then 'K' to go back to previous commands) - when I get to a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am in the command. The only work-around I have found to see the entire long command is to hit "Esc-V", which opens up the current command in a VI editor. I don't think I have anything odd in my .bashrc file. I commented out the "set -o vi" line, and I still had the problem. I downloaded a fresh copy of PuTTY and didn't make any changes to the configuration - I just typed in my host name to connect, and I still have the problem, so I don't think it's anything with PuTTY (unless I need to make some config changes). Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian
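    For reference, a small sketch of the usual cause and fix, assuming (as the edit above suggests) that the custom $PS1 contained non-printing escape sequences that were not bracketed:

        # Non-printing sequences in PS1 must be wrapped in \[ ... \] so readline can
        # measure the prompt's real width; otherwise long lines wrap onto themselves.
        PS1='\e[32m\u@\h:\w\$ \e[0m'          # broken: colour codes counted as printable characters
        PS1='\[\e[32m\]\u@\h:\w\$ \[\e[0m\]'  # fixed: readline ignores the bracketed parts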

    Read the article

  • smtp sasl authentication failure

    - by cromestant
    Hello, I have configured and fixed almost all the problems with my Postfix + Courier + MySQL setup for virtual mailboxes. I can now receive mail and send it from webmail (SquirrelMail). BUT, what I can't do is authenticate from an outside client. Since my ISP blocks port 25, I set up Postfix to listen on 1025 for SMTP and turned on verbose logging. Here is the verbose log of a failed authentication process: LOG. Authentication for IMAP and POP3 seems to be working, but this one is not. Here is the postconf -n output. Also, through MySQL I can verify that it is trying to validate through the system, by running a query that returns the encrypted password stored in the database. I can't seem to find the cause of this. Thank you in advance.
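    A minimal sketch of the SASL-related Postfix settings that usually matter for external clients; these are generic parameters assuming a Cyrus or Dovecot SASL backend, not values taken from the poster's postconf -n output:

        sudo postconf -e 'smtpd_sasl_auth_enable = yes'
        sudo postconf -e 'broken_sasl_auth_clients = yes'            # needed by some older mail clients
        sudo postconf -e 'smtpd_sasl_security_options = noanonymous'
        sudo postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination'
        sudo postfix reload
        # Then check that AUTH is actually offered on the alternative port:
        #   telnet mail.example.org 1025  ->  EHLO test  ->  expect a "250-AUTH ..." line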

    Read the article

  • Why kernel source is not installed

    - by Subhajit
    I want to install the kernel source on Ubuntu 12.04, where it is not installed. I checked with the command dpkg -s kernel, and the output is "Kernel is not installed, no information available". Hence I followed these steps to install it:
    1. Installed the dependencies:
        sudo apt-get install gcc libncurses5-dev git-core kernel-package fakeroot build-essential
        sudo apt-get update && sudo apt-get upgrade
    2. Downloaded the kernel source:
        wget http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.5.tar.bz2
        tar -xvf linux-3.5.tar.bz2
        cd linux-3.5/
    3. Compiled the source code to generate the .deb packages:
        make-kpkg clean
        fakeroot make-kpkg --initrd --append-to-version=-spica kernel_image kernel_headers
    4. Installed the .deb packages (two .deb packages are generated, one for the kernel headers and one for the kernel image):
        sudo dpkg -i linux-*.deb
    But after reboot it seems the kernel is not installed (checked by dpkg -s kernel). Please tell me where I am going wrong. Also, in step 3 I guess I am installing a new kernel (with the -spica suffix), but during boot this new kernel is not shown as an option. Please help me.
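    A small sketch of how to verify the result, on the assumption that the confusion comes from dpkg -s kernel (Ubuntu has no package literally named "kernel"; kernel images are linux-image-* packages):

        dpkg -l 'linux-image-*' | grep '^ii'   # kernel image packages that are actually installed
        uname -r                               # kernel version currently running
        ls /boot/vmlinuz-*                     # images present in /boot that GRUB can boot
        sudo update-grub                       # regenerate the GRUB menu so a newly installed kernel appears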

    Read the article

  • faster ( squid + apache httpd + apache tomcat )

    - by letronje
    We have a production setup where we have Squid in front (caching images, JS, CSS, etc.), Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php (a few PHP pages)), and Apache Tomcat 5.5 at the end serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I am wondering if replacing httpd with a faster web server like nginx/lighttpd would help. httpd right now does the job of URL rewriting (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate) and serving some low-traffic PHP pages. What would be an ideal replacement for httpd, given that we need these features? Is there a way to replace (squid + apache) with a single entity that does caching well (like Squid) for static stuff, rewrites URLs, compresses responses and forwards dynamic stuff directly to Tomcat? I have heard about Varnish Cache and am wondering if it can help.

    Read the article

  • reiserfsck --rebuild-tree failed: Not enough allocable blocks

    - by mojo
    I have a reiserfs volume that required a --rebuild-tree, but is currently failing to complete when I pass it --rebuild-tree. Here is the output that I receive when running it:

        reiserfsck 3.6.19 (2003 www.namesys.com)
        # reiserfsck --rebuild-tree started at Mon Oct 26 13:22:16 2009
        # Pass 0:
        # Pass 0 The whole partition (7864320 blocks) is to be scanned
        Skipping 8450 blocks (super block, journal, bitmaps) 7855870 blocks will be read
        0%....20%....40%....60%....80%....100%    left 0, 9408 /sec
        287884 directory entries were hashed with "r5" hash.
        "r5" hash is selected
        Flushing..finished
        Read blocks (but not data blocks) 7855870
            Leaves among those 6105606
            Objectids found 287892
        Pass 1 (will try to insert 6105606 leaves):
        # Pass 1
        Looking for allocable blocks .. finished
        0%....20%....40%....60%....80%....Not enough allocable blocks, checking bitmap...there are 1 allocable blocks, btw out of disk space
        Aborted

    I can't mount it, and I can't fsck it. I've tried extending the volume, but that hasn't helped either.

    Read the article

  • change directory automatically on ssh login

    - by Gareth
    Hi, I'm trying to get ssh to automatically change to a particular directory when I log in. I tried to get that behaviour working using the following directives in ~/.ssh/config:

        Host example.net
            LocalCommand "cd web"

    but whenever I log in, I see the following:

        /bin/bash: cd web: No such file or directory

    even though there is definitely a web folder in my home directory. Even using an absolute path gives the same message. To be clear, if I type cd web after logging in I get to the right folder. What am I missing here?
    EDIT: Different combinations of quotes/absolute paths give different error messages:

        LocalCommand "cd web"              ->  /bin/bash: cd web: No such file or directory
        LocalCommand cd web                ->  /bin/bash: line 0: cd: web: No such file or directory
        LocalCommand cd /home/gareth/web   ->  /bin/bash: line 0: cd: /home/gareth/web: Input/output error

    This makes me think that the quotes shouldn't be there, and that there's another error happening.
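    A short sketch of two alternatives, based on the fact that LocalCommand in ssh_config runs on the local machine rather than on the server (which would explain the odd errors); the directory and host names are taken from the question:

        # Option 1: start in ~/web for this one session (the cd runs on the server):
        ssh -t example.net 'cd ~/web && exec "$SHELL" -l'

        # Option 2: on the server, change directory for interactive logins only,
        # e.g. near the top of the remote ~/.bashrc:
        #   case $- in *i*) cd ~/web ;; esac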

    Read the article

  • How to make a btrfs snapshot?

    - by MountainX
    My /home partition consists of an entire physical disk. It is formatted as btrfs, and I want to snapshot it. I'm confused regarding subvolume naming in particular. I am aware that there are similar questions, but each one seems to be asking something different from what I'm asking (and they are older, which probably means outdated, given the rapid development of btrfs). For example, the answer to this question is apparently not the answer to my question, because my /home partition is a separate volume and the man page for btrfs now shows a different command for creating snapshots. (Another similar problem, no solid solution; someone else as confused as me on the naming issues.) My question, starting simple: is this the correct command to take a simple snapshot of my home partition?
        btrfs subvolume snapshot /home/@home /home/@home_snapshot_20120421
    I got really brave and tested it, and it does not work. The error is "error accessing /home/@home". As shown below, @home is listed. I'm obviously confused about subvolume names. Do I need to use them when creating snapshots? Some examples show taking snapshots of home using /home as the source parameter, but based on examples of root volumes, it seems to me that I need to use /home/@home. Would this command work? And if not, why?
        btrfs subvolume snapshot /home /home/@home_snapshot_20120421
    Is the @ just a naming convention? Is it meaningful at all? Here's some output that may be relevant:
        btrfs subvolume list /home
        ID 256 top level 5 path @home
    I'm not sure what that means, exactly. When I try btrfs device scan it gives an error (e.g. "unable to scan the device /dev/sda1"). My file system doesn't have any errors; everything is fine.
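    A minimal sketch of how this usually works, assuming /home is the mount point of the subvolume listed as "path @home" (so the snapshot source is the mounted path /home, not /home/@home, which would explain the "error accessing" message); the destination name follows the question's own convention:

        sudo btrfs subvolume list /home                                       # should show: ID 256 ... path @home
        sudo btrfs subvolume snapshot /home /home/@home_snapshot_20120421     # snapshot the mounted subvolume
        # Read-only variant:
        # sudo btrfs subvolume snapshot -r /home /home/@home_snapshot_20120421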

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A couple of options are available:
    - Use an OUD internal capability called the Replication Gateway
    - Use our synchronization tool, Directory Integration Platform, part of Oracle Directory Services Plus
    - Manual export and import
    Let's check the pros and cons of each method. The Replication Gateway is the natural, out-of-the-box solution to perform the task. We created it as a feature of OUD because it works at our replication protocol level. The gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit of doing this is that it provides strong consistency between the two types of directories. It fully leverages the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modification of existing ODSEE production instances, such as turning on the "retro changelog". Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level and not via the nsyyyy attribute. The OUD Replication Gateway does not require any specific tools or installation-specific procedure. It is managed like other OUD components, with monitoring and configuration via the standard console. The Replication Gateway does not, however, perform remapping or transformation of data between ODSEE and OUD. Using Directory Integration Platform as an external component to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay in using DIP to perform the synchronization task. You will have to turn on the retro changelog to get access to changes on the ODSEE side (this will impact disk and CPU usage and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances). You will not benefit from conflict resolution management, and this may have to be addressed at the application level, which is not always possible to implement. Using export and import seems very simple, but this methodology cannot ensure a highly available deployment with up-to-date entries on both sides. This solution can be used if full HA with up-to-date data is not needed (during synchronization time). It is often used when data cleaning needs to take place, to avoid polluting a new environment with old, unnecessary data.

    Read the article

  • Webcast Replay : SANS Institute Product Review of Oracle Identity Manager

    - by B Shashikumar
    Thanks to everyone who attended the SANS Institute webinar covering the product review of Oracle Identity Manager, and a special thanks to our guest speakers from SuperValu, Phillip Black and Patrick Abreo. If you missed the webcast, you can catch a replay here, and here are the slides that were used in the webcast. There were many questions that we could not answer as we ran out of time. We have captured some of those questions, with responses, below.
    Q: Is Oracle Identity Analytics still offered as a separate product, or is it part of Oracle Identity Manager?
    A: Oracle Identity Manager and Oracle Identity Analytics are now offered as part of Oracle Identity Governance Suite. OIA and OIM share a common UI architecture, a common data model and common support for connected and disconnected resources.
    Q: When requesting new access/entitlements, is there an approval process?
    A: Yes. We leverage SOA BPEL-based workflows for approvals.
    Q: Are the identity self-service capabilities based on Oracle ADF?
    A: Yes, they are completely based on Oracle ADF.
    Q: Can you give some examples of personalization and customization with Oracle Identity Manager 11gR2?
    A: With the new UI config framework we can enable different levels of UI customization. Customers now have the ability to customize by point-and-click or by drag-and-drop, without any need for coding, so users can easily personalize the interface of their application within the browser. For example, they can change the logo; rearrange or hide Home Page regions; regularly searched items can be saved and re-used; searchable and search-result columns can be configured; sorting preferences are remembered; and so on. For more sophisticated customization, customers can also edit the standard JSF within the page to alter business rules, modify page flows, page layouts and other items.
    Q: Can you explain the role of sandboxes in customization?
    A: Customers can make their custom changes within a sandbox so that they don't impact the production environment. They can make their changes, validate them, stage them and then commit them without affecting production users. This is similar to how source code control systems like Perforce work.
    To watch a replay of the webcast, click here.

    Read the article

  • How can I install oracle-java7 from webupd8 ppa?

    - by Ahmed Zain El Dein
    I installed ppa:webupd8team/java and I get the following error. Output from sudo apt-get install oracle-java7-installer:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages: binfmt-support visualvm ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-kochi-gothic ttf-sazanami-gothic ttf-kochi-mincho ttf-sazanami-mincho ttf-arphic-uming
        The following packages will be upgraded: oracle-java7-installer
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed.
        Need to get 0 B/16.0 kB of archives. After this operation, 64.5 kB of additional disk space will be used.
        Could not exec dpkg!
        E: Sub-process /usr/bin/dpkg returned an error code (100)

    Afterwards I ran the following lines to try to resolve the issue, because /usr/bin/dpkg does not actually exist (there is no dpkg there):

        mkdir /tmp/dpkg
        cd /tmp/dpkg
        wget http://archive.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_1.15.5.6ubuntu4_i386.deb
        ar x dpkg*.deb data.tar.gz
        tar xfvz data.tar.gz ./usr/bin/dpkg
        sudo cp ./usr/bin/dpkg /usr/bin/
        sudo apt-get update
        sudo apt-get install --reinstall dpkg

    Then I get this:

        $ sudo apt-get install --reinstall dpkg
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 6 not upgraded. 1 not fully installed or removed.
        Need to get 0 B/1,814 kB of archives. After this operation, 0 B of additional disk space will be used.
        dpkg: warning: 'dpkg-deb' not found on PATH.
        dpkg: 1 expected program(s) not found on PATH. NB: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    How can I fix this?
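    A minimal sketch of a likely repair path, on the assumption that the root cause is that only the dpkg binary was copied back while its helper programs (dpkg-deb, dpkg-query, ...) are still missing:

        cd /tmp
        apt-get download dpkg                  # fetch the matching dpkg .deb (or wget the right one for this release and architecture)
        ar x dpkg_*.deb                        # unpack the archive members, including data.tar.*
        sudo tar -xf data.tar.* -C /           # restore dpkg *and* its helpers (dpkg-deb, dpkg-query, dpkg-split, ...)
        sudo apt-get update
        sudo apt-get install --reinstall dpkg  # then reinstall cleanly through apt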

    Read the article

  • SQL SERVER – Backup SQL databases to Box or SkyDrive

    - by Pinal Dave
    To ensure your SQL Server or Azure databases remain safe, you should back up your databases periodically, and it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7 GB free and Box offers 5 GB free - both are reliable, and it is simple to send your backups there. SQLBackupAndFTP, in its latest version 9, added the option to back up to SkyDrive and Box (in addition to local/network folder, NAS drive, FTP, Dropbox, Google Drive and Amazon S3). Just select the databases that you'd like to back up and choose to store the backups in SkyDrive or Box. Below I will show you how to do it in detail.
    Select databases to backup: First connect to your SQL Server or Azure SQL Database, then select the databases you'd like to back up.
    Connect to SkyDrive or Box cloud: If you have a free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination will be disabled, as it is available in the Standard version or above. Click "Try now" to get a 30-day trial of all options. On the "SkyDrive Settings" form you'll need to authorize SQLBackupAndFTP to access your SkyDrive. Click "Authorize..." to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account and click "Allow". On the next page you will see the field with the authorization code; copy it to the clipboard. The Box operation is just the same. After that, return to SQLBackupAndFTP, paste the authorization code and click "OK". Once you are authorized, you can enter the path to a backup folder; SQLBackupAndFTP will create the folder if it does not exist. That's all that has to be done to back up to the SkyDrive or Box cloud. You can now click the "Run Now" button to test this job.
    Conclusion: Whatever your preference for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses:
        5-9 licenses: 20% off
        10-19 licenses: 35% off
        more than 20 licenses: 50% off
    Please let me know your favorite options for storing the backups. Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • new vhost - main host AWstats

    - by vn
    Hi, I just began working at this new job and I have to configure a new host for stats with awstats. I once used awstats on my own server, no biggie. Now I'm on a multi-site server with the access_log files nicely split. I copied an awstats.conf file from one of the sites that already has (working) stats. I changed the LogFile and SiteDomain values as described at http://awstats.sourceforge.net/docs/awstats_setup.html#BUILD_UPDATE, saved the conf and ran the commands:
        perl awstats.pl -config=mysite -update
        perl awstats.pl -config=mysite -output -staticlinks awstats.mysite.html
    (yes, I changed them with my own info). PROBLEM IS: whenever I try to access the html file or the dynamic page (with the config option on awstats.pl, like my working site does), I get the stats of the MAIN site from access.log itself (and not access_log-mysite), judging by what it says at the top of the page and by the hostname in the left tab (stats for mysite.com)... What did I do wrong? There are no errors from what I can see... Thanks a lot for any help
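    A short sketch of the usual things to check, assuming a stock awstats layout; the file name below follows the question's -config=mysite, and the /etc/awstats path is a typical location that may differ on this server:

        # -config=mysite makes awstats load a file named awstats.mysite.conf, searched in
        # the awstats directory and standard locations such as /etc/awstats. If only a
        # generic awstats.conf was edited, the main site's access.log is what gets parsed.
        grep -E '^(LogFile|SiteDomain)' /etc/awstats/awstats.mysite.conf

        perl awstats.pl -config=mysite -update
        # -output writes HTML to stdout, so redirect it into the target file:
        perl awstats.pl -config=mysite -output -staticlinks > awstats.mysite.html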

    Read the article

  • Windows 7 x64 CD to MP3 ripper

    - by marc_s
    I'm trying to find a good, simple CD-to-MP3 ripper to copy my physical CDs to my computer's hard disk. I'm running Windows 7 Professional x64, and that appears to be a major problem. All the "usual" free- and shareware tools I've tried (CD-to-MP3, Visual MP3 and quite a few more) seem to have trouble with either 64-bit Windows or with the HP CD/DVD drive built into my HP Compaq Elite 8100 machine. Does anyone have any good recommendations? I don't want to install a monster like iTunes - something really clean, small and simple would be fine. Free- or shareware - if it works reliably and with good quality output, I'll be happy to register! Any hints are welcome!

    Read the article

  • How to get which file is requested to open using a mac application?

    - by ramsey
    I have created a Mac application which can open my file extensions. But when I tested it, I did not get the path of the file requested to be opened with the application; instead I got "psn_0_151589". I checked it against iTunes, TextEdit, Xcode and other applications. Below is my app's sample main code, where I process the path of the opened file (Python):

        import sys
        import os.path

        print("File opened with this app :: ", sys.argv[1])
        if os.path.exists(sys.argv[1]):
            print("valid file :: { do something...}\n")
        else:
            print("Invalid file path received :: { do nothing }\n")

    OUTPUT:

        File opened with this app :: psn_0_151589
        Invalid file path received :: { do nothing }

    Hope someone knows how to get the file path that was opened using the application. Any help would be greatly appreciated. -ramsey

    Read the article

  • In an Entity-Component-System Engine, How do I deal with groups of dependent entities?

    - by John Daniels
    After going over a few game design patterns, I have settled on Entity-Component-System (ES System) for my game engine. I've been reading articles (mainly T=Machine) and reviewing some source code, and I think I have enough to get started. There is just one basic idea I am struggling with: how do I deal with groups of entities that are dependent on each other? Let me use an example. Assume I am making a standard overhead shooter (think Jamestown) and I want to construct a "boss entity" with multiple distinct but connected parts. The breakdown might look something like this:
        Ship body: Movement, Rendering
        Cannon: Position (locked relative to the ship body), Tracking/Firing at the hero, Taking damage until disabled
        Core: Position (locked relative to the ship body), Tracking/Firing at the hero, Taking damage until disabled, Disabling (er... destroying) all other entities in the ship group
    My goal would be something that is identified (and manipulated) as a distinct game element without having to rewrite subsystems from the ground up every time I want to build a new aggregate element. How do I implement this kind of design in an ES System?
        Do I implement some kind of parent-child entity relationship (entities can have children)? This seems to contradict the methodology that entities are just empty containers, and it makes the design feel more like OOP.
        Do I implement them as separate entities, with some kind of connecting component (BossComponent) and a related system (BossSubSystem)? I can't help but think this will be hard to implement, since how components communicate seems to be a big bear trap.
        Do I implement them as one entity with a collection of components (ShipComponent, CannonComponents, CoreComponent)? This seems to veer away from the intent of an ES System (the components here feel too much like heavyweight entities), but I'm new to this, so I figured I would put it out there.
        Do I implement them as something else I haven't mentioned?
    I know that this can be implemented very easily in OOP, but choosing ES over OOP is a decision I will stick with. If I need to break with pure ES theory to implement this design I will (it's not like I haven't had to compromise pure design before), but I would prefer to do that for performance reasons rather than start with a bad design. For extra credit, think of the same design, but where each of the "boss entities" is actually connected to a larger "BigBoss entity" made of a main body, a main core and 3 "boss entities". That would let me see a solution for at least 3 levels (grandparent-parent-child), which should be more than enough for me. Links to articles or example code would be appreciated. Thanks for your time.

    Read the article

  • Problems serving SVN over HTTPS on Ubuntu 10.04

    - by odd parity
    We've been experiencing some problems with our Subversion server after upgrading to Ubuntu 10.04. When trying to access a repository, regardless of client (I've tried git-svn and svn on Windows as well as svn on Ubuntu 10.04, from different computers and network locations), I get a 400 Bad Request. Here's the output from svn:

        svn: Server sent unexpected return value (400 Bad Request) in response to OPTIONS request for 'https://svn.example.org/svn/programs'

    Here are the relevant entries from the Apache logs (I'm running Apache 2.2):

        error.log:
        [Mon Jun 14 11:29:31 2010] [error] [client x.x.x.x] request failed: error reading the headers

        ssl_access.log:
        x.x.x.x - - [14/Jun/2010:11:29:28 +0200] "OPTIONS /svn/programs HTTP/1.1" 401 2643 "-" "SVN/1.6.6 (r40053) neon/0.29.0"
        x.x.x.x - - [14/Jun/2010:11:29:31 +0200] "ction-set/></D:options>OPTIONS /svn/programs HTTP/1.1" 400 644 "-" "SVN/1.6.6 (r40053) neon/0.29.0"

    If anyone has run into similar problems or could give me a pointer to track down the cause of this I'd be very grateful - I'd really like to avoid having to downgrade the box again.

    Read the article

  • High Load - Low IO - Low CPU usage

    - by devup
    I have a system whose load is rather high. As you can see from the top output below, CPU usage and I/O are negligible:

        top - 17:31:59 up 4 days, 2:34, 2 users, load average: 1.00, 0.99, 1.00
        Tasks: 71 total, 1 running, 70 sleeping, 0 stopped, 0 zombie
        Cpu(s): 2.0%us, 2.0%sy, 0.0%ni, 95.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   960720k total,  707288k used,  253432k free,   67328k buffers
        Swap: 2811896k total,    2644k used, 2809252k free,  528928k cached

          PID USER  PR NI VIRT  RES SHR S %CPU %MEM   TIME+ COMMAND
        15310 root  20  0 2512 1128 888 R  2.1  0.1 0:00.05 top

    I would appreciate any assistance with isolating the cause(s) of high load for when I/O and CPU are not factors.
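    For context, a small sketch of how one might chase a load average that isn't explained by CPU or I/O; on Linux the load figure also counts tasks in uninterruptible sleep (state D), so a single stuck task can hold it at 1.00:

        ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /D/'   # tasks in D state and the kernel function they are waiting in
        cat /proc/loadavg                                # runnable/total task counts behind the load average
        vmstat 1 5                                       # watch the "b" (blocked) column for a few seconds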

    Read the article

  • DNS on Redhat - rndc: no server specified and no default

    - by Syahmul Aziz
    Hi all. The error is shown in the 2 pictures below. The configurations for named.conf and the zone files are shown below. After applying "alveso"'s suggestion below, I think there is no error now, but I still can't ping my own domain www.p0864868.com (10.0.0.1), nor can I do host or nslookup, as shown in the previous pictures. Please assist. Thank you in advance. I have also attached the changes that I made to my named.conf as well as my resolv.conf, as shown below.
    Progress 2: turned on logging by typing "rndc querylog". The output below is what appears when I ping p0864868.com.
    Progress 3: changed the permissions of 10-0-0.zone and p086868.zone to 644 and the ownership to named:named. I still can't ping www.p0864868.com or run the host command; it says something like "network unreachable", and I don't understand what address it is referring to.
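    A brief sketch of checks that usually narrow this down; the zone and domain names are taken from the question, while the zone file path is an assumption and should be adjusted to the actual directory (or chroot) in use:

        named-checkconf /etc/named.conf                          # syntax-check the server configuration
        named-checkzone p0864868.com /var/named/p0864868.zone    # verify the zone file actually loads (path is an assumption)
        dig @127.0.0.1 www.p0864868.com A +short                 # query this server directly, bypassing /etc/resolv.conf
        rndc status                                              # "no server specified" usually points at a missing/mismatched rndc.key or controls block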

    Read the article

  • Difference between tcp recv buffer and tcp receive window size?

    - by pradeepchhetri
    This command shows the tcp receive buffer size in bytes:

        $ cat /proc/sys/net/ipv4/tcp_rmem
        4096 87380 4001344

    where the three values signify the min, default and max values respectively. Then I tried to find the tcp window size using the tcpdump command:

        $ sudo tcpdump -n -i eth0 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn and port 80 and host google.com'
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        16:15:41.465037 IP 172.16.31.141.51614 > 74.125.236.73.80: Flags [S], seq 3661804272, win 14600, options [mss 1460,sackOK,TS val 4452053 ecr 0,nop,wscale 6], length 0

    I got the window size to be 14600, which is 10 times the size of the MSS. Can anyone please tell me the relationship between the two?
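    A small sketch of the distinction, with commands to see both numbers; the interpretive note here is an addition, not from the question: the SYN advertises only the *initial* receive window, while tcp_rmem bounds how large the receive buffer, and therefore the scaled window, may grow over the life of the connection:

        cat /proc/sys/net/ipv4/tcp_rmem        # min / default / max receive buffer, in bytes
        sysctl net.ipv4.tcp_window_scaling     # must be 1 for windows beyond 64 KB (the wscale option seen in the SYN)
        ss -tim                                # per-connection memory and window details for live TCP sockets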

    Read the article

  • 14540059 - UPDATE FOR BI PUBLISHER ENTERPRISE 11.1.1.6.0 AUGUST

    - by Tim Dexter
    It's been a while, I know :( I have posts in the pipe, just gotta smoke em out! The latest update for BIP 11.1.1.6 was released last week. A bunch of defects have been addressed, as you can see below.
        13473493 - XMLP TRANSLATION ISSUE OF MILLION (ENG) TO MILLIONES (SPANISH)
        13521951 - BIP UPGRADE FROM 10G TO 11.1.1.5.0 IS NOT SUCCESSFULL FOR TIAA-CREF
        12542914 - ACC: REPORT VIEWER STRUCTURE HAS ERRORS - NO IFRAME AND NO LANG ATTRIBUTE
        13562801 - XML TAG DISPLAY SHOULD DEFAULT TO 'FOLLOW THE DATA
        13568043 - BIP QUERY FAILING VALIDATION DUE TO 'COALESCE' KEYWORD
        13592901 - THE REPORT IS THROWING AN SQL ERROR THAT REFERENCES CHECKING FOR NULL VALUES
        13836696 - BI PUBLISHER REPORT NOT GENERATED WHEN A TEXT FIELD START WITH "E.<SPACE>"
        13879206 - DM MIGRATION ISSUES
        13888939 - DM: LOV SEARCH CAUSING DB CONNECTION LEAK
        13904225 - XSLX ERROR DUE TO URL LINK AND USE OF LIST
        13930795 - RTF TEMPLATE GIVING DIFFERENT RESULTS IN DIFFERENT
        13942064 - XDOEXCEPTION THROWN WHEN RUNNING PEOPLESOFT TEMPLATES AND XML FILE
        13981523 - BI PUBLISHER ON 64-BIT WINDOWS CAN'T CONNECT TO MS ANALYSIS SERVICES CUBE
        14039229 - BIP 11.1.1.5.0 REPORTS ARE NOT WORKING ON BIP 11.1.1.6.0
        14055793 - BIP 11.1.1.6.0: DATE TYPE INPUT PARAMTER IS NOT DISPLAYING THE CORRECT VALUE USI
        14059851 - UNABLE TO GRANT PRIVILEGES TO ROLE: DOMAIN USERS; THE ROLE DOES NOT EXIST
        14109967 - LARGE OUTPUT CAUSES OUT OF MEMORY DUE TO LEFT OVER DEBUG CODE
        14163973 - ISSUES USING DATA MODEL EDITOR IN BIP 11.1.1.6
        14167915 - ORG.XML.SAX.SAXEXCEPTION: DATE FORMAT CANNOT BE NULL
        14240045 - EDITING SCHEDULED REPORTS DOES NOT REFLECT VALID VALUES FOR UPGRADED SCHEDULES
        14304427 - SEARCH DIALOG NOT BINDING PARAMETER VALUE - INVALID PARAMETER BINDING(S).
        14338158 - PASSWORD FIELD SHOULD NOT BE DISPLAYED FOR FMW SECURITY MODEL
        14393825 - OBIEE11G: LARGE NUMBER OF OBIPS SESSIONS CREATED WHEN USING SSO AND BI PUB
        14558377 - CONT. BUG 14240045: EDITING SCHEDULES IN BI PUBLISHER IS DEFAULTING TO 'ALL'
    This patch is just for BI Publisher standalone installs. For those of you using BIP within the wider BIEE suite there is the 11.1.1.6.2 BP1 patchset. More details on that here.

    Read the article

  • Cropping a PDF File's Margin During Printing

    - by JavaMan
    I'm using the free Acrobat Reader to print out some PDF documents that have very large top/bottom/left/right margins. I want to remove the margins, which waste too much space and make the fonts too small. I used to use Acrobat (the paid version with edit features) to crop the source PDF file manually, but since it is an old version it does not support the new PDF format, and I don't want to upgrade for such a simple use. Is there any free way to crop/remove unwanted white margins from the printed PDF? I am thinking of printing the PDF files to a PDF printer like the Bullzip PDF Printer and enlarging the output manually so as to remove any white margin, but there does not seem to be such a feature in Bullzip PDF Printer. Is there any other virtual printer software that can be used for this purpose?
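    Not from the question, but as a hedged side note: one free command-line route is the pdfcrop utility that ships with TeX distributions (TeX Live or MiKTeX, with Ghostscript installed), which trims pages to their content before printing; the file names below are placeholders:

        pdfcrop input.pdf cropped.pdf               # trim each page to its bounding box
        pdfcrop --margins 10 input.pdf cropped.pdf  # keep a 10pt margin around the content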

    Read the article

  • Hypertransport sync flood error

    - by Carl B
    What is it? And what causes it? Is it only for uncorrectable DIMM errors (Troubleshooting DIMM errors)? When a UCE occurs, the memory controller causes an immediate reboot of the system. During reboot, the BIOS checks the Machine Check registers and determines that the previous reboot was due to a UCE, then reports this in POST after the memtest stage: "A Hypertransport Sync Flood occurred on last boot". The BIOS reports this event in the service processor's system event log (SEL), as shown in the sample IPMItool output.
    The suggested answers I have seen include:
        Bad caps
        BIOS versions (happens in one version but not the other)
        Graphics card issues
        Lack of power to the CPU
    The list of possible generators seems to target everything but the computer case.
    System specs:
        Windows Home Premium 64
        Motherboard - MSI 790FX-GD70 (MS-7577) / BIOS v1.9 (American Megatrends Inc.)
        RAM - Patriot G Series 'Sector 5' Edition 4GB DDR3 1600
        CPU - AMD Phenom II X2 555 Black Edition Callisto 3.2GHz Socket AM3 80W (note: unlocked 2 cores; CPU-Z identifies it as a Phenom II X4 B55)
        Graphics - 2 x Radeon 5750 in CrossFire
        PSU - ABS 900W
        HDDs - 2 x Seagate 1.5 TB SATA
        SSD - 1 x OCZ 120 GB Vertex Plus R2

    Read the article
