Search Results


  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

    - Backbone.js frontend
    - Rails 3.2
    - PostgreSQL
    - Resque
    - S3 for storage

    The flow of the app is as follows:

    1) Request from frontend. Upload a video.
    2) Storing video
    3) Querying external APIs.
    4) Processing / encoding videos.
    5) Post to frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating apps, making several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. Also, I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

    A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
    B) Backend server (BS), main database. Gets requests from 1), posts to 2), saves uploads to 3).
    C) S3 storage.
    D) Server for querying APIs. Basically just Resque workers that post info back to 2).
    E) Server for video encoding. Processes videos uploaded on 3) and uploads them back.

    So I will have:

        A) frontend
               \
                B) MAIN_APP/DB --------- C) S3 Storage (Files)
               /               \                /
        D) ExternalAPI_queries   E) Video_Processing
           (redundant DB)           (redundant DB)

    All of this will supposedly talk to each other via HTTP requests. My reasoning is that the video processing part is really the most resource-intensive, and I would just run a barebones application that accepts requests and starts processing them.

    Questions:

    1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also store duplicates of the database, I guess, for safety reasons). Is that the right approach, or should I have one database that everyone connects to (and how, then)?
    2) Is it a good idea to separate the API queries from the video processing part? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is way more intensive.
    3) What should I use to distribute calls between the backend apps based on load?
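
    One common pattern for splitting the Resque work across machines - sketched here with hypothetical queue names, and assuming a single shared Postgres/Redis living on B) rather than duplicated databases - is to dedicate each worker box to named queues so it only pulls the jobs it is sized for:

        # on D): only consume API-query jobs
        QUEUE=api_queries TERM_CHILD=1 bundle exec rake resque:work

        # on E): only consume encoding jobs, one worker per core
        QUEUE=video_encoding COUNT=4 bundle exec rake resque:workers

    For question 3), HTTP calls between tiers are usually balanced by something like HAProxy or an Elastic Load Balancer in front of each app group, rather than by the apps themselves.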


  • Deployment/provisioning tool for commercial applications (not developed in-house)

    - by mfinni
    I help manage a few hosted commercial applications, and we have a lot of manual processes involved when doing new customer-instance deployments into the shared (multitenant) environment. Allow me to describe the most relevant features, and then we can talk about the tools. We have an application on AIX, that requires dozens of changes to config files (some plain text, some XML) as well as a good number of commands to be run on multiple servers - some to start the new instance, some to restart our shared authentication and reporting engines, etc. The config changes follow templates, of course. The servers in question will also depend on the initial conditions specified by the implementer/deployer - we may choose to deploy a given customer to our servers in Europe, or one set of servers may be active-active whereas a different set of servers is active-passive - in short, there's a lot of complications. We have another application that run on IIS 6 and SQL. The DBAs don't want any automation of the SQL components and that's fine with me, but automating the IIS bit would be great. For a new customer instance, we make a filesystem copy of a template Virtual Directory target named after the new customer, make a new AppPool to match, edit a VirDir template .xml file to replace the filepaths and AppPool names with the new ones, and then make a new VirDir from the modified template XML to point to the new filesystem folder and app pool. For the first case, something like ControlTier or Chef might be good. For the second, the new(ish) Web Deploy from MS would probably do a good job. Has anyone used these tools or others to do something similar for applications? More of a nice-to-have, not a fixed requirement - Has anyone used anything that works on both platforms? I'm looking for something free, because the official word is that within a year, we will have whatever HP has renamed the OpsWare suite, which should be able to do stuff like this. Edit - based on someone's suggestion, looking at CFengine for the AIX application, it doesn't seem to address my pain. The problem isn't keeping a given config synced across dozens of servers, we have rsync for that. The problem is that onboarding a new customer instance touches dozens of files, putting pieces of the same or similar information into them - some are new stanzas in existing files, some are new files, and some are new directories. This is a several-hours-long process that is also error-prone because it's mostly done by hand. I guess I'm looking for config-file generation and management. I have built a small Perl script to do something similar for a much smaller case - it binds a CSV file into variables, and then does a copy-and-search-and-replace from a set of template config files. I could probably do the same here.
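
    A minimal shell sketch of the CSV-driven config generation described above, in the spirit of the Perl script mentioned (the file layout, column names and @TOKEN@ placeholders are all hypothetical):

        #!/bin/sh
        # usage: fill_templates.sh customer.csv
        # customer.csv holds one row: name,region,apppool
        IFS=, read -r NAME REGION APPPOOL < "$1"

        mkdir -p "out/$NAME"
        for tpl in templates/*.tpl; do
            # replace @TOKENS@ in each template with values from the CSV row
            sed -e "s/@NAME@/$NAME/g" \
                -e "s/@REGION@/$REGION/g" \
                -e "s/@APPPOOL@/$APPPOOL/g" \
                "$tpl" > "out/$NAME/$(basename "$tpl" .tpl)"
        done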


  • Can't validate mine, sudo nor root in Debian "Jessie" Gnome anymore?

    - by Janar
    I'm a Debian beginner and a GUI guy, and I'm in a bit of trouble: I can't authenticate as sudo/gksu/root/su, nor as my main (super)user, after removing my user password via the Gnome user settings.

    History of actions (probably irrelevant):

    - Installed Debian "Jessie" GNU/Linux with the Xfce GUI (en-US) as the only OS. Hardware is a ThinkPad W510.
    - Skipped the root password during setup, so that my user easily gets sudo rights as superuser.
    - Have always logged in with Gnome (3.4.x), not once with Xfce; I installed Xfce only to get more control (easier management) over packages, so I could set up Gnome more to my liking.
    - Added more Jessie repositories (the same ones as in Wheezy stable by default, but for Jessie; Jessie only had the security-update repositories by default).
    - Installed lots of GTK(3)- and Gnome(3)-based software (restarted again after this).
    - Installed the proprietary graphics driver for my Nvidia Quadro (restarted once again after that one).
    - Installed more stuff related to my work/school/development.

    The actual problem:

    I had planned to restart again, but wanted to set up auto-login first. Instead I set my user password to none (don't ask why; perhaps caused by being awake for a looooong time), noticed it, enabled auto-login as well, but couldn't undo my previous mistake and create a new password for myself. Since my password is set to none, I would have expected that simply pressing return at an empty password prompt would do, but it won't authenticate. I tried Alt+F2 "gksu gedit", as well as sudo wget "https://www.some-page.eu/file.ext" and "su" in terminals; none of them worked (quite logical, actually, as I'm the sudoer and highest-ranked superuser, besides being the only user on the computer).

    Current state:

    Everything worked, and still works, fine after this accident, apart from the password prompts. I'm too spooked to log out or restart. The Synaptic package manager is still open with root rights (the only one left open prior to the issue, and not closed since, just in case). I googled for help and read some manuals/FAQs/how-tos - they mostly lead to sudoers file management, and I haven't found one specifically for my issue - so I'm still not any smarter. I really hope I don't have to redo the OS install all over again because of one stupid mistake. Thanks for your reply :-)
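
    If it comes to it, a minimal recovery sketch that avoids a reinstall - assuming you can still edit the boot entry in GRUB (press e on the Debian entry) - is to boot straight into a root shell and set the password there:

        # append to the end of the 'linux' line in the GRUB editor, then boot with Ctrl+x:
        #     init=/bin/bash

        # in the resulting root shell:
        mount -o remount,rw /     # the root fs is mounted read-only at this point
        passwd janar              # 'janar' is a placeholder for your own username
        sync                      # flush, then power-cycle (or exec /sbin/init)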


  • Computer freezes +/- 30 seconds, suspicion on SSD

    - by Robert vE
    My computer freezes for about 30 seconds, this happens occasionally. When it happens I can still move the mouse, sometimes even alternate between tabs in google chrome. If I try to open windows explorer nothing happens. Also chrome rapports "waiting for cache". It also happens in starcraft II, during which the sounds loops. I have made a Trace as this topic describes: How do I troubleshoot a Windows 7 freeze or slowness? Trace: https://docs.google.com/open?id=0B_VkKdh535p6NklhSDdBLURUMnc I have looked at it, but I couldn't figure it out. My system specs are: AMD Athlon X4 651 Asus Ati HD6670 ADATA SSD sp900 Asus f1a55 mainboard 4 GB crucial 1333 ram 500 watt atx ps I'm running Windows 7, fully updated. Any help is much appreciated. Update: I tried something before your reply that may have helped the problem. I don't know for sure if it has, it's too soon to tell. A bit of history first. I had problems installing win7 on my ssd from the start. In IDE mode it worked, but I had the same problems as above. AHCI was a total fail, with it on before install as well as turning it on after install (including tweaking register). I didn't bother installing the AMD chipset/AHCI as it was reported to have no TRIM function and thus make problems worse. Eventually I did install the AMD SMbus driver as the stability issues were driving me crazy. It worked, no more issues, until I installed some extra drivers and software. Audio/LAN/ASUS suite, I don’t see the relation, but somehow it screwed up my system again. As a last effort I posted here on this site. After which the thought occurred to me turn on AHCI again as by now I had all necessary drivers installed anyway. (plus all windows updates downloaded/installed in the meantime) I did and stability didn’t seem great the first few reboots, but eventually everything seemed to work great. I tried to play starcraft II – an almost guaranteed freeze before – and I had no problems. I’m basically crossing my fingers and hope the problem is gone for good. I still think it has something to do with my SSD. In my research into the problem I noticed a lot of these issues with sandforce 2281 firmware, the exact same firmware I have. People reported the same problem that I had, freezes. Additionally they reported that during a freeze the hdd light stayed on, I noticed after I read this that this happened with my computer as well. None of this is conclusive evidence that my SSD is really to fault, but it is suspicious. And why turning on AHCI would fix it I don’t know. Thank you Tom for taking a look, if the problem returns I will certainly do what you advised.


  • Why Does My Windows 7 PC Freeze After Waking from Sleep?

    - by Blaenk
    Hey guys. I have Windows 7 running on my computer and everything is perfect. There's only one little problem. Sometimes when I leave my computer, I come back and my monitor is turned off or asleep, whichever it is. This is fine, I set it to do this. However, after turning on the monitor and moving the mouse around, the mouse cursor freezes; both the keyboard and mouse don't respond to anything, for example the keyboard's windows key won't bring up the start menu and moving the mouse around does not move the cursor around on the screen. I have to wait about a minute or two before things start working again. I figured this was a power savings setting problem, so I went into Control Panel Power Options. I only have Turn off Display = 30 minutes and Put Computer to Sleep = Never. Of course, I went into the advanced power settings to look through there. I put Never to turn off the hard disk, Sleep after never, and that's about it. Nothing else there looks like it might be causing this. I went into the device manager and checked the mouse and the keyboard, and they both have the Allow this device to wake the computer checked for both of them. Perhaps this other bit of information might help: Sometimes I VNC into my PC using my MacBook, and sometimes, as soon as it shows me the desktop, the same thing happens. The mouse won't move and VNC won't register any events on the server (Which is my PC of course). I close the client (And I know it has nothing to do with the client), then immediately restart it and try to connect. When I click the connect button, it hangs there, as if the PC is not responding. Basically it's like whenever I try to wake the computer from sleep, it does so by showing me the desktop, then it freaks out. Then again, I guess the computer isn't sleeping because the setting is set to 'Sleep after = Never'. I honestly don't know what's going on, would appreciate any insight. Thanks!


  • How to setup GIT repo on server with need for working dir (non-bare)

    - by OrangeTux
    I want to configure a Git repo for a website. Multiple users will have a clone of the repo on their local machines, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir / non-bare repository. The idea is that the working dir of the repository will be the root folder of the website, so at the end of each day all changes will be visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files:

        git push origin master
        [email protected]'s password:
        Counting objects: 3, done.
        Writing objects: 100% (3/3), 227 bytes, done.
        Total 3 (delta 0), reused 0 (delta 0)
        remote: error: refusing to update checked out branch: refs/heads/master
        remote: error: By default, updating the current branch in a non-bare repository
        remote: error: is denied, because it will make the index and work tree inconsistent
        remote: error: with what you pushed, and will require 'git reset --hard' to match
        remote: error: the work tree to HEAD.
        remote: error:
        remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
        remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
        remote: error: its current branch; however, this is not recommended unless you
        remote: error: arranged to update its work tree to match what you pushed in some
        remote: error: other way.
        remote: error:
        remote: error: To squelch this message and still keep the default behaviour, set
        remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To ssh://[email protected]/home/orangetux/www/
         ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/'

    So I'm wondering: is this the right way to set up a Git repo for a website? If so, how do I do it? If not, what is a better way to set up a Git repo for the development of a website?

    EDIT: "you can't push to a non-bare repository" - OK, clear. But what's the way to solve my problem? Create a bare repository on the server and have a clone of that repo on the same server in the htdocs folder? This looks a bit clumsy to me; to see the result of a commit I'd have to clone the repository each time.
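
    A common way to get this effect without the clumsy extra clone (a sketch; the paths and remote name are hypothetical) is to push to a bare repository and let a post-receive hook check the pushed branch out into the web root:

        # on the server
        git init --bare /home/orangetux/site.git

        # contents of /home/orangetux/site.git/hooks/post-receive:
        #!/bin/sh
        # after every push, force the web root to match the pushed master branch
        GIT_WORK_TREE=/home/orangetux/www git checkout -f master

        # make the hook executable
        chmod +x /home/orangetux/site.git/hooks/post-receive

        # on each developer's machine
        git remote add live ssh://user@yourserver/home/orangetux/site.git
        git push live master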


  • How do I (robustly) remotely execute tasks on Windows workstations in a domain?

    - by Zac B
    I'm not even sure if "robustly" is a word. Anyway. Context: We have a few hundred Windows 7 workstations on a LAN. We use AD/GPO management pretty heavily, but there are a lot of periodic and/or manual maintenance tasks we need to do that can't be done via GPO/scheduled task. For example, say I want to execute program X (which runs silently, in the background, and doesn't bother the user) on workstation Y, or say I want to execute task A on a workstation group B either on a schedule or on demand. Kicking the users off of their computers to do this (i.e. using RDP) is a no-no, and doesn't work on groups anyway. Question: What's the best way to do this that is robust enough that, after setup, I could give it to beginner support people (read: people who are phobic of the command line, and get confused with GUI interfaces more complicated than Firefox)? I'm a competent programmer, and, if there is a robust set of tools or framework out there for this type of task, I'd consider hacking something together myself if it didn't take too long. If there's some combination of tools or techniques that others use to make remote-workstation-administration doable by beginners, I have yet to find it. For those who care about the "why": I'm midlevel IT, and was told to implement a remote management solution that allows arbitrary/scheduled remote execution, with confirmation that programs actually ran remotely, and the ability to view what they returned. "Why?" I asked, "Can't I just use PsExec and the task scheduler on a dispatcher machine?" "No," I was told, "'Joe' the second-week tech is going to be in charge of this one, and he needs something simple with a GUI." What I've tried: I've played with making a bunch of one-clickable "transfer files to remote computer and run them with PsExec" batch/VB scrips, but those tend to break down and don't easily support running on customizable groups. I've played a little bit with the Windows version of Puppet, but it doesn't support arbitrary-time remote execution (it's ability to group computers into a tree/node structure is really nice though). I've used an older version of Altiris, and, while it does a lot of what I want, it's interface is awful, it's slow, crashes a lot, and is probably too expensive for management. SwiftWater's DMS solution does some of what I want, but it's very underdeveloped, closed-source (not a deal breaker but not ideal), and I get the impression that support and reliability are lacking.


  • Win-XP Browsers Hang on page load - (waiting for...)

    - by CHarmon
    Hello, I'm having problems with my browsers hanging while loading pages on my desktop machine. I'm using Windows XP Pro with SP3, fully updated except for IE 8. All three of my browsers - IE 7, Chrome and Firefox - have the same problem. Pages are not being loaded and hang on "waiting for ...": the browsers are waiting either for the page being loaded or for ad servers. Sometimes a page will load, but the loading graphic continues to be displayed as if the page were still loading when it appears to be fully loaded. The problem is bad enough that I can't really use any of my browsers, though I can eventually get most pages to load by stopping and restarting the page load.

    I have a DSL modem with a wireless router, and I have been able to eliminate the modem and router as the source of my problem: my laptop doesn't have any problems, even when hardwired to the router with its wireless connection disabled. I deleted the NIC and let XP re-install it, tried a different network cable, and tried the same router port used in the laptop test. One clue that may be important is that I can't connect to my router from the desktop machine - the page hangs while trying to connect - yet I can ping the router, and I can quickly connect to it from the laptop. I also can't use the Windows Update process; the page never fully loads. The problem affects other user accounts and even happens in safe mode. I am convinced the problem is with part of the OS - some layer able to affect all of the browsers.

    The purpose of this post is to see if anyone has some ideas before I do an XP repair. I have done quite a bit of troubleshooting:

    - Ran a full anti-virus scan with AVG - no problems.
    - Ran full scans with Spybot, MalwareBytes and Sophos anti-rootkit - no problems.
    - Ran Chkdsk with both options checked.
    - Ran Disk Cleanup.
    - Defragged.
    - Re-installed IE7.
    - Cleared all the browser caches.
    - Ran CCleaner (registry tool).
    - Ran HijackThis - nothing unusual (problem happens in safe mode too).
    - Ran Process Explorer - no unusual processes.
    - Used System Restore and fell back several days - no change in the problem.
    - Booted to last known good configuration - no change in the problem.
    - Ran MicrosoftFixit50199.msi - no change in the problem.

    Any ideas or suggestions would be appreciated - I'm not looking forward to doing a repair on XP. Thanks in advance for any help.
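
    One more thing that may be worth trying before a repair install - offered as a hedged guess, since every browser, every account and safe mode are affected while ping still works - is resetting the Winsock catalog and TCP/IP stack, which a leftover LSP from a removed security product can corrupt. From a command prompt, then reboot:

        netsh winsock reset catalog
        netsh int ip reset c:\resetlog.txt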


  • Postfix issues sending mail to addresses under domain located on server

    - by iamthewit
    I recently installed Virtualmin on my nice shiny new Rackspace cloud. Everything went seamlessly, but I've been having some issues getting email to send properly. The problem seems to be that the server cannot send mail to email addresses whose domain is hosted on my server. For example, on my server I run multiple virtual domains; let's call this one test.com. When I run the mail command from the shell (mail [email protected]) I get the following back in my maillog:

        Oct 6 14:55:18 test postfix/pickup[8737]: DC1131612CC: uid=0 from=
        Oct 6 14:55:18 test postfix/cleanup[8769]: DC1131612CC: [email protected]
        Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: [email protected], size=353, nrcpt=1 (queue active)
        Oct 6 14:55:18 test postfix/error[8771]: DC1131612CC: [email protected], relay=none, delay=0, delays=0/0/0/0, dsn=5.0.0, status=bounced (User unknown in virtual alias table)
        Oct 6 14:55:18 test postfix/cleanup[8769]: DD07D1612D1: [email protected]
        Oct 6 14:55:18 test postfix/bounce[8772]: DC1131612CC: sender non-delivery notification: DD07D1612D1
        Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: from=<, size=2268, nrcpt=1 (queue active)
        Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: removed
        Oct 6 14:55:18 test postfix/local[8773]: DD07D1612D1: [email protected], relay=local, delay=0.03, delays=0/0/0/0.03, dsn=2.0.0, status=sent (delivered to command: /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME)
        Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: removed

    When I run mail [email protected] the message is sent and received perfectly fine. I'm a bit of a noob when it comes to servers, but I pick things up fairly quickly, so please excuse any incorrect terminology and my general noobiness. Any help would be greatly appreciated; I've been googling for quite a while but I haven't found a solution yet. Cheers guys.

    Here is the reformatted postconf output; do you want the reformatted main.cf file too, or is this enough?

        alias_database = hash:/etc/postfix/aliases
        alias_maps = hash:/etc/postfix/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        mailbox_command = /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        myhostname = server.test.com
        newaliases_path = /usr/bin/newaliases.postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sender_bcc_maps = hash:/etc/postfix/bcc
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        unknown_local_recipient_reject_code = 550
        virtual_alias_maps = hash:/etc/postfix/virtual
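
    The status=bounced (User unknown in virtual alias table) line suggests the recipient simply has no entry in /etc/postfix/virtual, which virtual_alias_maps points at. A hedged sketch of the usual fix (the address and the local mailbox name are placeholders; Virtualmin normally maintains this file itself):

        # map the virtual address to the mailbox user it should be delivered to
        echo "someuser@test.com    someuser.test" >> /etc/postfix/virtual

        # rebuild the hash map referenced by virtual_alias_maps and reload postfix
        postmap /etc/postfix/virtual
        postfix reload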


  • Any way to recover ext4 filesystems from a deleted LVM logical volume?

    - by Vegar Nilsen
    The other day I had a proper brain fart moment while expanding a disk on a Linux guest under Vmware. I stretched the Vmware disk file to the desired size and then I did what I usually do on Linux guests without LVM: I deleted the LVM partition and recreated it, starting in the same spot as the old one, but extended to the new size of the disk. (Which will be followed by fsck and resize2fs.) And then I realized that LVM doesn't behave the same way as ext2/3/4 on raw partitions... After restoring the Linux guest from the most recent backup (taken only five hours earlier, luckily) I'm now curious on how I could have recovered from the following scenario. It's after all virtually guaranteed that I'll be a dumb ass in the future as well. Virtual Linux guest with one disk, partitioned into one /boot (primary) partition (/dev/sda1) of 256MB, and the rest in a logical, extended partition (/dev/sda5). /dev/sda5 is then setup as a physical volume with pvcreate, and one volume group (vgroup00) created on top of it with the usual vgcreate command. vgroup00 is then split into two logical volumes root and swap, which are used for / and swap, logically. / is an ext4 file system. Since I had backups of the broken guest I was able to recreate the volume group with vgcfgrestore from the backup LVM setup found under /etc/lvm/backup, with the same UUID for the physical volume and all that. After running this I had two logical volumes with the same size as earlier, with 4GB free space where I had stretched the disk. However, when I tried to run "fsck /dev/mapper/vgroup00-root" it complained about a broken superblock. I tried to locate backup superblocks by running "mke2fs -n /dev/mapper/vgroup00-root" but none of those worked either. Then I tried to run TestDisk but when I asked it to find superblocks it only gave an error about not being able to open the file system due to a broken file system. So, with the default allocation policy for LVM2 in Ubuntu Server 10.04 64-bit, is it possible that the logical volumes are allocated from the end of the volume group? That would definitely explain why the restored logical volumes didn't contain the expected data. Could I have recovered by recreating /dev/sda5 with exactly the same size and disk position as earlier? Are there any other tools I could have used to find and recover the file system? (And clearly, the question is not whether or not I should have done this in a different way from the start, I know that. This is a question about what to do when shit has already hit the fan.)
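
    On the allocation-policy question, rather than guessing where the restored logical volumes landed, the segment layout can be printed directly; a small sketch (the volume group name follows the example above):

        # show which physical extents each LV occupies, and where its segments start
        pvs -v --segments
        lvs -o lv_name,seg_start_pe,seg_size,devices vgroup00

        # list ext4 backup superblock locations recorded in the filesystem
        # (only works if the primary superblock is still readable)
        dumpe2fs /dev/mapper/vgroup00-root | grep -i superblock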


  • Windows 7 issue with multitasking and memory

    - by Nitesh
    I have been seeing some problems in my Windows OS recently. First, my system configuration:

        Processor - Intel(R) Core(TM)2 Quad CPU Q8400 @ 2.66 GHz
        Installed memory (RAM) - 4.00 GB (3.00 GB usable)
        System type - 32-bit operating system

    I dual-boot two OSes on this system: Windows 7 and CentOS. I have been using this setup for a long time with no problems, and all of a sudden, for the last couple of weeks, I have been facing problems in Windows 7. I used to run multiple jobs almost every time I logged in with no problem, but now I am not able to do multiple jobs at the same time. For example:

    1. I am now not able to listen to music in Windows Media Player and view photos at the same time. All of a sudden the system stops responding, then responds again after 5 minutes, and the music resumes where it stopped.
    2. When I browse the internet it hangs all of a sudden, doesn't respond for 2 or 3 minutes, and then continues loading.

    It happens with every operation I do on the system. Even typing this was difficult - it hangs very frequently even though I am doing a single task. I have never come across this kind of problem before. So the first thing I did was check the usage of the processor and the memory. The processor usage seemed fine: for a single task it was somewhere around 3 to 5%. The memory was the weird part: in spite of no task running, it was using somewhere around 34 to 41% of memory. So I opened Task Manager and clicked Resource Monitor on the Performance tab. In the memory section of the monitoring tool I found the usage of my RAM; it was something like this:

        Hardware reserved - 1029 MB
        In Use - 1430 MB
        Modified - 49 MB
        Standby - 1566 MB
        Free - 22 MB

    And I could also see:

        Available 1588 MB
        Cached 1615 MB
        Total 3067 MB
        Installed 4096 MB

    That is all I could find out. I have no idea why my computer is acting so weird all of a sudden, and the performance problem is growing day by day. I also don't know if there is a problem in the BIOS; I have left it at default settings for a long time. Please help me, and thank you in advance for reading this and helping me.


  • Virtualbox - routing subnet to bridge adapters

    - by user42384
    Hello, I have set up a Debian Lenny box with 3 VirtualBox Lenny guests bridged to eth0 of the host (on VirtualBox 3.1.6). When testing on my local LAN this all worked perfectly well, and traffic flowed to and from the IPs of the virtual machines as it should. However, now that it's in its co-lo home, the networking setup is a bit different and I'm unable to get traffic to flow to the vboxes properly.

    Specifically, the host has its own primary IP, and I have a separate subnet of 8 (6 usable) IPs routed to the box for use by the vboxes. So, eth0 on the host is:

        Machine IP: 2x.x.x.137
        Gateway IP: 2x.x.x.138
        Subnet Msk: 255.255.255.252

    The subnet for the vboxes is:

        Subnet:  2x.x.x.240/29
        Netmask: 255.255.255.248

    vbox1 is configured to 2x.x.x.241 on eth0 as follows:

        auto eth0
        iface eth0 inet static
            address 2x.x.x.241
            netmask 255.255.255.248

    Setting up a virtual interface (eth0:0) on the host with one of these subnet IPs allows me to ping that address only from vbox1, and it allows me to ping vbox1 from the host. I can also ping that virtual interface perfectly well from outside, so the IPs are definitely landing at my machine. It seems I'm missing some sort of routing instruction, either on the host or on vbox1, to get traffic moving between the subnet and the default gateway, but I can't figure out what it should be, or what glaringly obvious thing I'm missing. Most of my obvious attempts (the gw of eth0, the IP of eth0) were rejected by the route command with SIOCADDRT: No such device (i.e. it can't find it). I tried setting vbox1 to bridge on eth0:0, but this was not an acceptable device name and VBoxHeadless refused to start. The physical machine does have an unused physical NIC at eth1 that can be used if necessary.

    The host machine is running iptables configured by ferm; I have experimented with allowing forwarding for that subnet, but I wouldn't have thought this was necessary given the nature of the VirtualBox devices (nor did it actually work). Clearing out all of these rules for a blank iptables set does not resolve the issue. (You can see the ferm-generated iptables at http://codedumper.com/ojaze.) Thanks for any help you can give... Patrick
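
    Since the /29 is routed to the host's primary IP rather than switched onto its segment, one hedged alternative to bridging is to let the host route the subnet itself: move the guests onto a host-only/internal network, give the host one of the subnet IPs there as the guests' gateway, and enable forwarding. A sketch (addresses follow the 2x.x.x.240/29 example above; ferm must also permit the forwarded traffic):

        # on the host: put a subnet IP on the host-only interface the guests attach to
        ip addr add 2x.x.x.241/29 dev vboxnet0
        sysctl -w net.ipv4.ip_forward=1

        # on vbox1, /etc/network/interfaces: point the default route at the host
        #     address 2x.x.x.242
        #     netmask 255.255.255.248
        #     gateway 2x.x.x.241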


  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or process explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe) as when I kill the process, the commit charge drops dramatically. However the process in question is consuming only ~220MB of Private Bytes yet killing the process drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor and how can I view its impact in process explorer? Note: I claim that I understand the difference between reserved and committed memory already: my investigations above relate specifically to Private Bytes which includes only committed memory and excludes reserved memory. the Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed memory, and should not contribute to the commit charge. I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, that any process potentially could. Further Investigations I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB despite Procexp's reported a Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states that: Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap’s definition of “Private Data” is more granular than that of Process Explorer’s “private bytes.” Procexp’s “private bytes” includes all private committed memory belonging to the process. i.e. that VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes". Also, after reading the 'Process committed memory' section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory, he highlights two cases which won't show up in Private Bytes: File mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files). pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections)


  • Init script & the green [ OK ]

    - by Lord Loh.
    I am trying to install FastCGI for nginx on an EC2 instance. I followed the steps explained here, but they are meant for Debian and do not work out of the box on a Red Hat based system. I modified the script a bit to look like this:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          php-fcgi
        # Required-Start:    $nginx
        # Required-Stop:     $nginx
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: starts php over fcgi
        # Description:       starts php over fcgi
        ### END INIT INFO

        . /etc/rc.d/init.d/functions

        (( EUID )) && echo "You need to have root privileges." && exit 1

        BIND=/tmp/php.socket
        USER=nginx
        PHP_FCGI_CHILDREN=15
        PHP_FCGI_MAX_REQUESTS=1000

        PHP_CGI=/usr/bin/php-cgi
        PHP_CGI_NAME=`basename $PHP_CGI`
        PHP_CGI_ARGS="- USER=$USER PATH=/usr/bin PHP_FCGI_CHILDREN=$PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS=$PHP_FCGI_MAX_REQUESTS $PHP_CGI -b $BIND"
        RETVAL=0

        start() {
            echo -n "Starting PHP FastCGI: "
            #ORIGINAL LINE
            #daemon $PHP_CGI --quiet --start --background --chuid "$USER" --exec /usr/bin/env -- $PHP_CGI_ARGS
            #MODIFIED LINE
            daemon --user=$USER $PHP_CGI -b $BIND&
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && touch /var/lock/subsys/php-fcgi
            #echo "$PHP_CGI_NAME."
        }

        stop() {
            echo -n "Stopping PHP FastCGI: "
            killall -q -w -u $USER $PHP_CGI
            RETVAL=$?
            echo "$PHP_CGI_NAME."
            rm /var/lock/subsys/php-fcgi
        }

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            restart)
                stop
                start
                ;;
            *)
                echo "Usage: php-fastcgi {start|stop|restart}"
                exit 1
                ;;
        esac
        exit $RETVAL

    The problem I have now is that service php-fcgi start keeps the shell blocked. If I run service php-fcgi start & and then ps aux, I see the php-cgi process running, bound to the socket. I only see the start command finish when I execute service php-fcgi stop. How do I solve this blocking issue? I have tried adding an & at the end of the line spawning the daemon, but other scripts do not seem to be doing this. This is the most complicated script I am attempting to modify yet :-(

    How do I get the script to display the green [ OK ]? I checked scripts like httpd and saw that all they were doing was something as shown below, but I never see a green [ OK ] when I execute php-fcgi. I also discovered that calling echo_success (with functions sourced) displays the green [ OK ], but I do not see any other scripts in /etc/rc.d/init.d/ executing echo_success or echo_failure. What have I got wrong? Also, how do I specify PHP_FCGI_CHILDREN with daemon?

        echo
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/
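
    A hedged sketch of how start() often ends up looking with the stock Red Hat helpers (assuming /etc/rc.d/init.d/functions as sourced above): daemon prints the green [ OK ] / red [FAILED] itself via success/failure, and backgrounding php-cgi inside the command string - rather than backgrounding the daemon call - is what stops the script from blocking. Environment assignments placed in that same string are also how PHP_FCGI_CHILDREN reaches php-cgi:

        start() {
            echo -n "Starting PHP FastCGI: "
            # the & inside the quoted command string makes the runuser/bash call
            # return at once, so 'service php-fcgi start' no longer blocks the shell
            daemon --user=$USER "PHP_FCGI_CHILDREN=$PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS=$PHP_FCGI_MAX_REQUESTS $PHP_CGI -b $BIND &"
            RETVAL=$?
            echo    # newline after the [ OK ] that daemon printed
            [ $RETVAL -eq 0 ] && touch /var/lock/subsys/php-fcgi
            return $RETVAL
        }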


  • Computer suddenly dies; screen displays weird flickering lines, then restarts

    - by Imray
    I've been having this terrible problem for a little while and just managed to get a picture of 'dead screen' for the first time and I am posting it to seek help. Randomly, at irregular intervals (typically once a week), while working on something (it's been different things every time) my computer will just suddenly go dead - the screen turns to exactly the picture below (the lines flicker a little bit), it hangs there for a few seconds and then restarts. Obviously this is extremely frustrating and I want to try to stop it. I've searched numerous postings with similar keywords but nothing exactly the same as mine. Does anyone have any idea what might be the cause of this? I would post all my system settings and installed programs but the list is long and I don't know how much relevance each item would be. If you'd like to know something specific, please comment and I'll let you know whatever you need. SPECS C:\Users\Imray>systeminfo Host Name: Imray OS Name: Microsoft Windows 7 Professional OS Version: 6.1.7600 N/A Build 7600 OS Manufacturer: Microsoft Corporation OS Configuration: Standalone Workstation OS Build Type: Multiprocessor Free Registered Owner: Imray - Owner Registered Organization: Product ID: 00371-152-9333854-85895 Original Install Date: 06/09/1999, 5:45:21 PM System Boot Time: 22/03/2013, 8:58:18 AM System Manufacturer: Gateway System Model: DX4840 System Type: x64-based PC Processor(s): 1 Processor(s) Installed. [01]: Intel64 Family 6 Model 37 Stepping 2 GenuineIntel ~3201 Mhz BIOS Version: American Megatrends Inc. P01-A3 , 17/05/2010 Windows Directory: C:\Windows System Directory: C:\Windows\system32 Boot Device: \Device\HarddiskVolume2 System Locale: en-us;English (United States) Input Locale: en-us;English (United States) Time Zone: (UTC-05:00) Eastern Time (US & Canada) Total Physical Memory: 6,135 MB Available Physical Memory: 3,632 MB Virtual Memory: Max Size: 12,268 MB Virtual Memory: Available: 8,114 MB Virtual Memory: In Use: 4,154 MB Page File Location(s): C:\pagefile.sys Domain: WORKGROUP Logon Server: \\Imray-OWNER Hotfix(s): 4 Hotfix(s) Installed. [01]: KB971033 [02]: KB958559 [03]: KB977206 [04]: KB981889 Network Card(s): 2 NIC(s) Installed. [01]: 802.11n Wireless PCI Express Card LAN Adapter Connection Name: Wireless Network Connection DHCP Enabled: Yes DHCP Server: 192.168.2.1 IP address(es) [01]: 192.168.2.13 [02]: fe80::1df1:5399:6890:91f6 [02]: Microsoft Virtual WiFi Miniport Adapter Connection Name: Wireless Network Connection 2 DHCP Enabled: Yes DHCP Server: N/A IP address(es) Graphics Card Specs Name ATI Radeon HD 5570 PNP Device ID PCI\VEN_1002&DEV_68D9&SUBSYS_E142174B&REV_00\4&18A4B35E&0&0008 Adapter Type ATI display adapter (0x68D9), ATI Technologies Inc. compatible Adapter Description ATI Radeon HD 5570 Adapter RAM 1.00 GB (1,073,741,824 bytes) Installed Drivers atiu9p64 aticfx64 aticfx64 atiu9pag aticfx32 aticfx32 atiumd64 atidxx64 atidxx64 atiumdag atidxx32 atidxx32 atiumdva atiumd6a atitmm64 Driver Version 8.700.0.0 INF File oem1.inf (ati2mtag_Evergreen section) Color Planes Not Available Color Table Entries 4294967296 Resolution 1920 x 1080 x 59 hertz Bits/Pixel 32 Memory Address 0xD0000000-0xDFFFFFFF Memory Address 0xFBDE0000-0xFBDFFFFF I/O Port 0x0000D000-0x0000DFFF IRQ Channel IRQ 4294967293 I/O Port 0x000003B0-0x000003BB I/O Port 0x000003C0-0x000003DF Memory Address 0xA0000-0xBFFFF Driver c:\windows\system32\drivers\atikmpag.sys (8.14.1.6095, 181.00 KB (185,344 bytes), 06/09/1999 5:59 PM)


  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and am close, I feel, and I'm pulling my hair out ... please help! I have an OpenVPN client whose server sets local routes and also changes the default gw (I know I can prevent that with --route-nopull). I'd like to have all outgoing http and ssh traffic go via the local gw, and everything else via the VPN.

        Local IP is 192.168.1.6/24, gw 192.168.1.1.
        OpenVPN local IP is 10.102.1.6/32, gw 192.168.1.5
        OpenVPN server is at {OPENVPN_SERVER_IP}

    Here's the route table after the OpenVPN connection:

        # ip route show table main
        0.0.0.0/1 via 10.102.1.5 dev tun0
        default via 192.168.1.1 dev eth0 proto static
        10.102.1.1 via 10.102.1.5 dev tun0
        10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6
        {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0
        128.0.0.0/1 via 10.102.1.5 dev tun0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1

    This makes all packets go via the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the VPN server's address, as expected; the packets have 10.102.1.6 as source address (the VPN local IP) and are routed via tun0, as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic).

    What I tried was:

    - create a 2nd routing table holding the 192.168.1.6/24 routing info (copied from the main table above)
    - add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80
    - add an ip rule to match on the mangled packets and point them at the 2nd routing table
    - add ip rules for "to 192.168.1.6" and "from 192.168.1.6" pointing at the 2nd routing table (though this is superfluous)
    - changed the ipv4 source validation to none with net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0

    I also tried an iptables mangle OUTPUT rule and an iptables nat PREROUTING rule. It still fails and I'm not sure what I'm missing:

    - iptables mangle PREROUTING: packet still goes via the VPN
    - iptables mangle OUTPUT: packet times out

    Is it not the case that, to achieve what I want, when doing wget http://echoip.org I should change the packet's source address to 192.168.1.6 before routing it off? But if I do that, wouldn't the response from the http server be routed back to 192.168.1.6 and never be seen by wget, since it is still bound to tun0 (the VPN interface)?

    Can a kind soul please help? What commands would you execute after the OpenVPN connection comes up to achieve what I want? Looking forward to hair regrowth ...
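
    A hedged sketch of the fwmark approach, run after the tunnel comes up (the ports and the table number are arbitrary). Two details matter for the questions above: locally generated traffic never passes PREROUTING, so the mark has to be set in the mangle OUTPUT chain, and the source address - already chosen for tun0 by the time the packet is re-routed - is fixed up by SNAT, with conntrack translating the replies back so wget still sees them:

        # separate routing table whose default route is the local gateway
        ip route add default via 192.168.1.1 dev eth0 table 100
        ip rule add fwmark 1 table 100
        ip route flush cache

        # mark outgoing HTTP and SSH (locally generated traffic => OUTPUT chain)
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,22 -j MARK --set-mark 1

        # rewrite the source address of marked packets leaving eth0
        iptables -t nat -A POSTROUTING -o eth0 -m mark --mark 1 -j SNAT --to-source 192.168.1.6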


  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG-files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing. I see the process bar filling and different phases are mentioned on the process bar. Nothing weird there. When the merging is done (and if I don't blink my eyes), I can see layers-palette is populated with the chosen files and, by quickly judging from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. Problem is that the layers and the image disappear in a flash. There is no error message. Everything is like prior starting the photomerge. No file has been changed. I could continue to use Photoshop normally. This is what I've tried so far: Loaded folder which has 38 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded the same files, but chose Use Files instead of Use Folder in the photomerge's window Loaded 19 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded 10 JPG images, ⇑ see above Loaded 5 JPG images, see above Loaded 3 JPG images, see above Scaled the images to 2256 x 1504 and ˜< 1 megabytes per file Loaded in a set of 38, 19, 10, 5, 3 Following steps are tested with these smaller files and with a set of 5 images Read Adobe's forums and reduced the amount of RAM Photoshop uses gradually from ˜ 80 % to 50 % (though I didn't understand the logic behind this) Would've reduced cache tile size to 128K, but it was set so already Disabled OpenGL Scaled the images to 800 x 533 and ˜ 100 kilobytes per file, loaded a set of 5 Read more unanswered threads around the internet In between each test I closed and reopened Photoshop. This is the first time I've even tried using photomerge. Am I doing something wrong? How can I locate what is the problem? How do I fix this? Photoshop is 64 bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6. Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which, effectively, is the same as photomerge (even the dialog looks kind of the same), works! Even with the original JPGs without any issues. This doesn't fix photomerge, though.


  • Unable to boot from LiveCD/USB and even Super Grub Disk!

    - by Reuben L.
    Hi all, I'm in a fix. Basically this morning, I decided to format my Win7 as it was getting really slow and I did so with no problems. I also have a Linux Mint OS on dual boot. Since I was springcleaning my windows partition, I decided it was a good idea to do the same to my linux partition. I downloaded the latest version of Linux Mint (Julia) and burned the LiveCD. Now here is where the problem lies, when I restarted Windows and chose to boot from the LiveCD, it didn't work. No joke. There was just a little underscore blinking for a long time before it went back to GRUB which prompted me to select an OS to boot. However, when I went into my old Linux Mint OS and restarted the machine, the LiveCD worked... to a certain extent. It would load and look as though it was ready to install Linux Mint 10 but the moment it got to the option screen, the whole screen turned into a checkered and jumbled mess. At this point I thought it was the LiveCD or the .iso file. I had an Ubuntu LiveUSB for recovery purposes and I tried that. The exact same thing happened. Can't boot the LiveUSB if I restarted from Windows, but works when I reboot from Linux. BUT still the same checkered screen that doesnt respond. Did a bit of googling and reckoned it might be something wrong with my GRUB. Did some updating and didnt make a difference. Then I tried the Super Grub Disk and STUPIDLY uninstalled GRUB. (Note that booting to SGD had the exact same problem - can't be done if I rebooted from Windows). Now I can't access my Linux Mint 9 cos the the bootup screen (mbr) only has Windows 7 as an option. Remember me mentioning that I can't boot from any CD/USB/recovery CD when I reboot from Windows? And now that I can't access Linux, there's no way for me to do any form of recovery! I've tried using the command prompt utility at startup recovery but to no avail. Anyone can help me with this?


  • Graphics card artifacting

    - by White Phoenix
    This is my current build: EVGA X58 (first generation) motherboard Intel i7 965 @ stock clocks 3x 2GB DDR3-1600 Corsair RAM at stock timings and voltages Corsair AX750 80 Plus Gold PSU 1 Optical Drive 1 Seagate 7200.10 500 GB drive 2x Western Digital Caviar Black 1 TB drives OCZ Vertex 1 60 GB EVGA GTX 460 Antec 1200 case HT-Omega Striker 7.1 Sound Card Windows 7 32-bit Professional (PAE Enabled) My graphics card started artifacting while I was playing a game. It artifacted, the display blinked, then I got an NVIDIA driver has crashed and recovered message. Kept going, more artifacts, another crash, but this time my display blanked out and I couldn't do anything. Restarted my computer - artifacting is in the BIOS - got to Windows 7 but it BSOD'd before I could even log in. I restarted the computer again - artifacts cleared themselves out and I managed to get to Win7, but it soon started blinking in and out and artifacting again. Checked the card temps and they're well within range. (50 idle, 70 full load) The ambient temperature here is about 80-85F with high humidity. Tried Safe Mode and it still froze up/BSOD'd. Already tried the following to fix this problem: - Reseat the graphics card and swapped in a different slot. - Removed cover on card and sprayed with compressed air to clean it out. - Swapped around memory and/or went with only using one stick at a time. - Underclocked card I called EVGA Tech Support and they said that the voltage on the 12V rail of my Corsair AX750 PSU was on the high end of the "acceptable" range (12.4V, highest within acceptable range is 12.6V - optimal is obviously 12V). They gave me an RMA number anyway, but I want to get a second opinion from you all before I send this thing off, as shipping from where I live to EVGA is kind of pricey. This PSU is only 6 months old. So that I don't have to play RMA tag, which case would be most likely? I'm strapped for cash at the moment so I want to reduce the amount of RMAs I have to do since shipping is expensive here. Is there any surefire way to test to see if it's really the graphics card or the PSU? I tried unplugging any devices that were connected to the 12V rail (except for my SSD and graphics card) as I do have 3 mechanical hard drives, but the voltage for that rail didn't drop (it remained at 12.4V). I'm fairly sure it isn't the drivers since I'm getting the artifacting at the BIOS too. Right now I'm back in Windows 7 but I don't know for how long until it messes up again. Any ideas?


  • What are the most likely bottlenecks determining the performance of CamStudio screen recording?

    - by Steve314
    When doing screen recording, I can get a frame rate of maybe 15 frames per second for the full screen on my 1080p monitor using the XVID codec. I can increase the speed a bit by recording a region, changing screen modes, and tweaking other settings, but I'm curious what hardware upgrades might give me the biggest bang for my buck. My PC is budget, but modern... Athlon 2 X4 645 (3.1GHz, quad core, limited cache) processor. 4GB single channel DDR3 1066 RAM. ASRock motherboard with NVidia GeForce 7025/nForce 630a Chipset. ATI Radeon HD 5450 graphics card - 512MB on board, not configured to steal system RAM. I dual-boot Windows XP and Windows 7. For the moment, XP is my bigger performance concern as it's still my getting-things-done O/S as opposed to my browser-host O/S. My goal is to make a few programming-related tutorials. For a lot of that I don't need screen recording - I can make up some slides, record audio with the PC switched off, yada yada. When I do need screen recording, I'll mostly be recording Notepad++, Visual Studio or a command prompt. Occasionally, I may be recording some kind of graphics or diagram program and using my pre-Bamboo cheap Wacom tablet - I have the CS2 versions of Photoshop and Illustrator, but I'd much more likely be using Microsoft Paint. Basically, what I'll be recording won't be making huge demands on the machine - but recording a fair number of pixels (720p preferred) will be useful. What's particularly wierd - not so long ago I still had a five-year-old Pentium 4 based PC. And (with the same 1080p monitor) it could record at not far from the same frame rate. So clearly the performance issues are more subtle than just throw-money-at-it. My first guess would be that the main bottleneck is the bandwidth for transferring data to/from the graphics card. Is that likely to be correct? In support of that, see this [Radeon HD 5450 review][1] - the memory bandwidth is only 12.8 GB/s. If you can't get data out of graphics memory quickly, you can't transfer it back to the system memory quickly. Apparently, that's slower than some top-end cards in 2002.


  • Random Computer Crashes

    - by Josh W.
    Ok, here's a wierd one for you all. Occasionally my PC here at work will crash in a very peculiar way. My dual monitors will suddenly go blank as if there is no longer a video signal, the USB mouse light will go dark and mouse stays unresponsive, the keyboard lights will not change status when the appropriate keys are pressed (Num/Caps/Scoll Lock). The CD Tray WILL open and close. But the computer will not respond to a ping request. For all intents & purposes it's as if the computer is off, except it wasn't intentional by me. The power light and internal fans are still on and I've now lost any unsaved work. Now here's where it gets wierd. This PC is part of a batch of PC's we got from a local vendor who does our initial system builds. Mine, and 6 other co-workers PCs all have the same issue. Originally we thought it was a bad combination of hardware, but through trial and error the only thing we haven't eliminated are the OS, Mobo & CPU. The problem was so bad for some of them that they ended up going back to their 5 year old dinosaurs in order to get some work done, for me the problem isn't as bad, maybe once every other day or so, but still enough to bite me in the ass if I've forgotten my ritualistic pressing of CTRL-S every 1-5 minutes. In this case we've tried two different video cards, two different power supplies, two different memory configurations, running on a UPS/not on UPS, updating/rolling back video drivers, three different bios revisions. The only things we haven't swapped are the mobo & cpu, mainly because a new mobo means a new Hardware Abstraction Layer, ie re-install of windows and there's alot of other software on this PC that takes forever to reload by hand. There was a base image that our systems team created with all the drivers installed and the basic setup of software our company uses, but they then must customize the setup for us programmers so it takes a while to get a new configuration up and going. I'm a programmer by day and am usually pretty good at diagnosing computer problems whether through trial and error or not. We've pretty much exhausted all the ideas we can think of here, short of a new mobo/cpu. Was hoping someone out there might have anything else we can try.. Relevant Parts: OS: XP Pro 32-bit Motherboard: Intel DG41RQ CPU: Intel Core-2 Quad Q9400 @ 2.66GHz Current BIOS Version/Date Intel Corp. RQG4110H.86A.0014.2010.0306.1151, 3/6/2010 Dual LCD's, Viewsonic VG930m & Samsung SyncMaster 910v (other people have different models, but listed in case there's some very wierd problem with the signals being sent/received) PS/2 Keyboard USB Microsoft Intellimouse BIOS Versions Tried: R 0013 12/23/2009 R 0014 3/6/2010 Video cards Tried GeForce 8400 GS Radeon HD 4350 - ASUS EAH4350 Two Different Power Supplies a 380W & 550W Ram Configurations 2GB - 1 x 2GB 4GB - 2 x 2GB


  • Looking for personal scheduling software / todo list with rather particular requirements

    - by Cthulhu
    I've been scouring the web for a couple of (my boss') hours, looking for a piece of software that can organize my tasks in two ways. First, I have a list of bullet points / todo items I can do at any given time. Think of stuff like solve issue X, ask X about Y, write documentation about Z, etcetera. Second, I have a number of running projects I'd like to organize better, as in schedule for a certain part of a day of the week. Ideally (I think), my day would be organized as 50% spent on projects and 50% on the other small things. Now, I don't like most calendar applications (such as Outlook & friends), their UI is too 'official', not really easy to move stuff around (in my experience). I don't like most todo lists either, too static and things. I like new, fast and hip software. I've looked at GTD versions of Tiddlywiki, and I like mGSD for one particular feature. You can make lists of tasks and basically give them one of three statusses - Now (nothing required, you can do it right away), Waiting (you need someone or something before you can work on this), or the most gratifying of all, Done. I like that feature because it's a simple todo list, but indicates more accurately the things you can do right now and the things you depend on someone else for to do. Anyways, that's just a small aspect of that program - most of the other things in there I can't find a particularly good use for. If there's something like that (maybe something that works even snappier, cleaner UI), combined with an easy to use bit of scheduling software (optionally separated into two applications, but preferrably not), I think I'd like that. (Besides something like that, I also use several instances of Trac to monitor tasks and bugs and things for the various clients and projects I have to serve, and TaskCoach to monitor the amount of time I spend on each task / each client. An easy / low-maintenance time tracking software would be neat too) Of course, the software has to be free to use. I don't like shareware, trials, limited software and the like. I could develop my own too, but I'm lazy like that and there's a dozen other projects I'd like to do in my free time (neither of which I actually do). Edit: I like David Seah's printable CEO stuff, if something like that (with some video game / instant achievement / gratification) exists in software, it'd be awesome.


  • Problems with MGCP proxy creation

    - by Popof
    Hi, I'm trying to bypass my ISP router with my FreeBSD server (I have an optical connection, so an RJ45 cable connects the box to the WAN). Internet and TV are working fine (using igmpproxy to forward the TV stream), but I have a problem with the phone. The ISP's box is connected to the server, which gives it a LAN address. The problem is that when the box builds MGCP packets (and especially SDP ones) it uses its LAN address. So I'm thinking of writing a UDP proxy to handle MGCP and SDP packets, replacing the LAN address with the server's WAN address and then forwarding the packets to the WAN.

    Before starting to code, I captured the stream packets using my server as a bridge between the WAN connection and the ISP's box. Then, to see if my solution is viable, I tried to send those packets back to the box using nemesis. I tried to send a packet (found in the capture) containing an endpoint audit:

        AUEP 1447 aaln/[email protected] MGCP 1.0
        F: A

    In the Wireshark capture the box replied:

        200 1447 OK
        A: a:PCMU;PCMA;G726-16;G726-24;G726-32;G726-40;G.723.1-5.3;G.723.1-6.3;G729;TELEPHONE-EVENT,
           fmtp:"TELEPHONE-EVENT 0-15,144,149,159", p:10-30, b:4-40, e:on, t:00, s:on, v:L;M;G;D,
           m:sendonly;recvonly;sendrecv;inactive;confrnce;replcate;netwtest;netwloop, dq-gi

    But when I use nemesis, I get an ICMP error: Port unreachable (Type 3, Code 3). To build this packet, the WAN source address from the capture is replaced with my server's LAN address, using the mgcp-callagent port (2727), and the packet is sent to the LAN address of the box at the mgcp-gateway port (2427). The command I use is:

        nemesis udp -S 192.168.2.1 -D 192.168.2.2 -x 2727 -y 2427 -P packet_to_send

    I also tried a UDP scan of the box on the callagent and gateway ports:

        PORT     STATE         SERVICE
        2727/udp open|filtered unknown
        2427/udp closed        unknown

    I found those results a little bit strange, because it should be port 2427 that is open, as it was in the capture:

        Internet Protocol, Src: <ISP MGCP Server>, Dst: <My WAN Address>
        User Datagram Protocol, Src Port: mgcp-callagent (2727), Dst Port: mgcp-gateway (2427)

    Does anyone have an idea how to get my box to respond to my requests? Thanks in advance, and sorry for my English.

    Read the article

  • Linux Best Practices

    - by Zac
    I'm a life-long Windows developer switching over to Linux for the first time, and I'm starting off with Ubuntu to ease the learning curve. My new laptop will primarily be a development machine: 6 GB RAM, 320 GB HD. I'd like there to be two non-root users: (a) Development, which will always be me, and (b) Guest, for anyone else. I assume the root user is added by default, like System Administrator in Windows.

    (1) I'd like to mount /home on its own partition, but how does this work if I have two user accounts (Development and Guest)? Are there two separate /home directories, or do they get shared? Is it possible to allocate more space for Development and only a tiny bit of space for Guest in GRUB2? How?!?!

    (2) I'm assuming it's okay that all of my development tools (Eclipse & plugins, SVN, JUnit, ant, etc.) and Java will end up installed in non-/home directories such as /usr and /opt, but that my Eclipse/SVN workspace will live under my /home directory on a separate partition... any problems, issues or concerns with that?

    (3) As far as partitioning schemes go, nothing too complicated, but not plain Jane either:

    - Boot partition, 512 MB, in case I want to install other OSes
    - Ubuntu & non-/home file system, 187.5 GB
    - Swap partition, 12 GB = RAM x 2
    - /home partition, 120 GB

    I don't have any bulky media data (no music or video libraries; this is a lean and mean dev machine), so having 320 GB is like winning the lottery and not knowing what to do with all this space. I figured I'd give a little extra space to the OS/FS partition since I'll be running JEE containers locally and doing a lot of file IO, logging and other memory-intensive operations. Any issues, problems, concerns, suggestions?

    (4) I was thinking about using ext4; it seems to have good file timestamping without any space ceiling for me to hit. Any other suggestions for a dev machine?

    (5) I read somewhere that you need to be careful when you install software as the root user, but I can't remember why. What general caveats do I need to be aware of when doing things (installing packages, making system configurations, etc.) as root vs. the "Development" user? Thanks!
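
    On point (1): both accounts normally just get directories under the same mounted partition, so /home/development and /home/guest would share the 120 GB; per-user limits come from disk quotas rather than from GRUB2, which is only the boot loader. As a small illustration (the user-name paths below are assumptions), a script like this shows how the shared /home partition's space is split between the two home directories:

      #!/usr/bin/env python3
      """Report how a shared /home partition is split between accounts (sketch)."""
      import os
      import shutil

      total, used, free = shutil.disk_usage("/home")
      print(f"/home partition: {total // 2**30} GiB total, {free // 2**30} GiB free")

      for account in ("development", "guest"):      # assumed user names
          home = os.path.join("/home", account)
          size = 0
          for root, _dirs, files in os.walk(home, onerror=lambda e: None):
              for name in files:
                  try:
                      size += os.path.getsize(os.path.join(root, name))
                  except OSError:
                      pass                          # unreadable or vanished file
          print(f"{home}: {size / 2**20:.1f} MiB in use")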

    Read the article

  • Why do I sometimes get 'sh: $'\302\211 ... ': command not found' in xterm/sh?

    - by amn
    Sometimes when I simply type a valid command like 'find ...', or anything really, I get back the following, which is completely unexpected and confusing (the ... is the command name I typed):

      sh: $'\302\211...': command not found

    There is some corruption going on, I think. I don't use color in my prompt, and I am using the Bash shell in POSIX mode as sh (chsh to /bin/sh and so on - $SHELL is sh). What is going on, and why does this keep happening? Anything I can debug? I think this is more of an xterm issue than an sh one, or at least a combination of the two. Files, for context.

    My /etc/profile, as distributed with Arch Linux x86-64:

      # /etc/profile

      #Set our umask
      umask 022

      # Set our default path
      PATH="/usr/local/sbin:/usr/local/bin:/usr/bin"
      export PATH

      # Load profiles from /etc/profile.d
      if test -d /etc/profile.d/; then
          for profile in /etc/profile.d/*.sh; do
              test -r "$profile" && . "$profile"
          done
          unset profile
      fi

      # Source global bash config
      if test "$PS1" && test "$BASH" && test -r /etc/bash.bashrc; then
          . /etc/bash.bashrc
      fi

      # Termcap is outdated, old, and crusty, kill it.
      unset TERMCAP

      # Man is much better than us at figuring this out
      unset MANPATH

    My /etc/shrc, which I created so that sh parses some file on startup when it is a non-login shell. This is achieved with the ENV variable, set in /etc/environment with the line ENV=/etc/shrc:

      PS1='\u@\H \w \$ '
      alias ls='ls -F --color'
      alias grep='grep -i --color'
      [ -f ~/.shrc ] && . ~/.shrc

    My ~/.profile; I am launching X when logging in on the first virtual tty:

      [[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && exec xinit -- -dpi 111

    My ~/.xinitrc; as you can see, I am running the system as a VirtualBox guest:

      xrdb -merge ~/.Xresources
      VBoxClient-all
      awesome &
      exec xterm

    And finally my ~/.Xresources, no fancy stuff here I guess:

      *faceName: Inconsolata
      *faceSize: 10
      xterm*VT100*translations: #override <Btn1Up>: select-end(PRIMARY, CLIPBOARD, CUT_BUFFER0)
      xterm*colorBDMode: true
      xterm*colorBD: #ff8000
      xterm*cursorColor: S_red

    Since /etc/profile references, among other things, /etc/bash.bashrc, here is its content:

      #
      # /etc/bash.bashrc
      #

      # If not running interactively, don't do anything
      [[ $- != *i* ]] && return

      PS1='[\u@\h \W]\$ '
      PS2='> '
      PS3='> '
      PS4='+ '

      case ${TERM} in
          xterm*|rxvt*|Eterm|aterm|kterm|gnome*)
              PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
              ;;
          screen)
              PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
              ;;
      esac

      [ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion

    I have no idea what that case statement does, by the way; it does look a bit suspicious, but then again, who am I to know.
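
    For what it's worth, the two bytes that bash prints as $'\302\211' can be decoded directly, which at least identifies what is being prepended to the command. A tiny sketch (the octal values are taken from the error message above):

      #!/usr/bin/env python3
      # Decode the bytes bash shows as $'\302\211' (octal 302 211 = 0xC2 0x89).
      import unicodedata

      raw = b"\302\211"                 # Python accepts the same octal escapes
      ch = raw.decode("utf-8")          # 0xC2 0x89 is valid UTF-8
      print(hex(ord(ch)))               # 0x89
      print(unicodedata.category(ch))   # 'Cc' -- a C1 control character

    So the prefix is U+0089, a C1 control character, which suggests something is writing stray terminal control bytes in front of the typed command rather than any file-system corruption.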

    Read the article

< Previous Page | 808 809 810 811 812 813 814 815 816 817 818 819  | Next Page >