Search Results

Search found 20360 results on 815 pages for 'capture output'.


  • Middle Mouse Button does not work in XFCE / Arch Linux

    - by Alp
    I have the XFCE desktop manager installed on my Arch Linux system. With E17 (the Enlightenment desktop manager) I had no problems with my mouse: all buttons worked correctly out of the box. But in XFCE my middle mouse button does not fire an event at all (no output with xev). Evdev seems to identify my mouse correctly (Razer Deathadder) because it echoes its name in the Xorg logs. I have no idea what could cause this or how to debug the problem. I start both E17 and XFCE with startx. Here is my ~/.xinitrc:

        exec startxfce4 --with-ck-launch
        #exec enlightenment-start
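    One way to narrow this down (an untested sketch; xinput ships in the xorg-xinput package on Arch, and the device id below is whatever `xinput list` reports for the mouse):

        # list input devices and find the mouse's id
        xinput list
        # watch raw button events for that device; press the middle button
        xinput test <device-id>
        # compare with what the X server delivers to clients
        xev | grep -i button

    If xinput test shows button 2 firing but xev stays silent, something in the XFCE session is grabbing the button rather than evdev mislabeling the device.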

    Read the article

  • Read only file system

    - by Jack Moon
    I'm running Ubuntu 12.10. Upon opening any shell I get the following error:

        /home/jack/.rbenv/libexec/rbenv-init: line 87: cannot create temp file for here-document: Read-only file system

    I realised this wasn't simply an rbenv issue, as any file I try to write to returns an error saying the system is read-only. I don't know how else to describe my problem; each time I boot up, the system goes through a disk check, where it supposedly fixes several errors on my disk. Here is my /etc/fstab:

        # <file system> <mount point> <type> <options> <dump> <pass>
        proc  /proc  proc  nodev,noexec,nosuid  0  0
        # / was on /dev/sda1 during installation
        UUID=1cc4b2ab-a984-4516-ac25-6d64f5050244  /     ext4  errors=remount-ro  0  1
        # swap was on /dev/sda5 during installation
        UUID=4e0dfeae-701a-43ce-b5c6-65f15ab3d8e3  none  swap  sw                 0  0

    The entire file system is read-only. I've tried the following:

        sudo fsck.ext4 -f /dev/sda1

    which gave the following (shortened) output:

        /dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****
        /dev/sda1: ***** REBOOT LINUX *****
        /dev/sda1: 1257080/45268992 files (1.0% non-contiguous), 50696803/181051904 blocks
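    Since / is mounted with errors=remount-ro, the kernel flips it read-only the moment ext4 reports an error, so it is worth checking why it remounted (a rough sketch; smartctl comes from the smartmontools package):

        # see what triggered the read-only remount
        dmesg | grep -iE 'ext4|remount|I/O error'
        # check the drive's own health report
        sudo smartctl -H /dev/sda
        # if fsck printed "REBOOT LINUX", reboot straight away instead of remounting rw

    Repeated errors on every boot usually point at failing hardware rather than a one-off filesystem glitch.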

    Read the article

  • iptables rules keep showing up

    - by Omriko
    I just installed an Ubuntu Precise server. After a few weird communication issues I checked the iptables list and found:

        Chain INPUT (policy DROP)
        target prot opt source      destination
        ACCEPT all  --  anywhere    anywhere
        ACCEPT all  --  anywhere    anywhere  state RELATED,ESTABLISHED
        ACCEPT tcp  --  10.0.0.0/24 anywhere  tcp spts:1024:65535 dpt:ssh state NEW
        ACCEPT icmp --  anywhere    anywhere  state NEW
        ACCEPT icmp --  anywhere    anywhere  state NEW
        ACCEPT icmp --  anywhere    anywhere  state NEW
        ACCEPT icmp --  anywhere    anywhere  state NEW
        DROP   tcp  --  anywhere    anywhere  tcp dpt:10520 state NEW
        DROP   udp  --  anywhere    anywhere  udp spts:1:65535 dpt:31337 state NEW
        DROP   udp  --  anywhere    anywhere  udp spts:1:65535 dpt:31338 state NEW
        DROP   udp  --  anywhere    anywhere  udp spts:1:65535 dpt:54320 state NEW
        DROP   udp  --  anywhere    anywhere  udp spts:1:65535 dpt:54321 state NEW
        DROP   tcp  --  anywhere    anywhere  tcp dpt:12345 state NEW
        DROP   tcp  --  anywhere    anywhere  tcp dpt:12346 state NEW
        DROP   tcp  --  anywhere    anywhere  tcp dpt:20034 state NEW
        DROP   tcp  --  anywhere    anywhere  tcp dpt:16600 state NEW
        DROP   tcp  --  anywhere    anywhere  tcp dpt:16660 state NEW
        DROP   tcp  --  anywhere    anywhere  tcp dpt:65000 state NEW
        DROP   udp  --  anywhere    anywhere  udp dpt:34555 state NEW
        DROP   udp  --  anywhere    anywhere  udp dpt:35555 state NEW
        DROP   udp  --  anywhere    anywhere  udp spts:netbios-ns:netbios-dgm dpts:netbios-ns:netbios-dgm state NEW
        DROP   tcp  --  anywhere    anywhere  tcp spts:1024:65535 dpt:netbios-ssn state NEW
        DROP   tcp  --  anywhere    anywhere  tcp spts:1024:65535 dpt:microsoft-ds state NEW
        DROP   udp  --  anywhere    anywhere  udp spt:microsoft-ds dpt:microsoft-ds state NEW
        DROP   udp  --  anywhere    anywhere  udp spts:1024:65535 dpt:microsoft-ds state NEW
        DROP   tcp  --  anywhere    anywhere  tcp spts:1024:65535 dpt:loc-srv state NEW
        DROP   tcp  --  anywhere    anywhere  tcp spts:1024:65535 dpt:5000 state NEW
        DROP   tcp  --  anywhere    anywhere  tcp spts:1024:65535 dpts:1025:1029 state NEW
        DROP   udp  --  anywhere    anywhere  udp spts:1:65535 dpt:loc-srv state NEW
        ACCEPT tcp  --  anywhere    anywhere  tcp spts:1024:65535 dpt:28082 state NEW
        DROP   all  --  anywhere    anywhere  state NEW

        Chain FORWARD (policy DROP)
        target prot opt source      destination

        Chain OUTPUT (policy DROP)
        target prot opt source      destination
        ACCEPT all  --  anywhere    anywhere
        ACCEPT all  --  anywhere    anywhere  state RELATED,ESTABLISHED
        ACCEPT tcp  --  anywhere    anywhere  tcp spts:tcpmux:65535 dpts:tcpmux:65535 state NEW
        ACCEPT udp  --  anywhere    anywhere  udp dpts:1:65535 state NEW
        ACCEPT icmp --  anywhere    anywhere  state NEW
        ACCEPT tcp  --  anywhere    anywhere  tcp spts:1024:65535 dpt:28082 state NEW
        DROP   all  --  anywhere    anywhere  state NEW

    I tried to wipe the rules, I disabled UFW, and I've rewritten and saved iptables rules according to this guide, but every minute or so the old rules return. I checked crontab for scheduled tasks and there is nothing in there, but still these rules reappear every minute. Please help!
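    Something is re-running iptables on a timer; one way to catch it in the act (a sketch, assuming the auditd package is installed):

        # watch for anything executing the iptables binary
        sudo auditctl -w /sbin/iptables -p x -k ipt-watch
        # wait a couple of minutes for the rules to come back, then:
        sudo ausearch -k ipt-watch
        # the records include the pid/ppid of whatever ran it
        # also check the common auto-restore hooks:
        grep -r iptables /etc/network/if-pre-up.d/ /etc/network/interfaces /etc/cron* 2>/dev/null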

    Read the article

  • PXE boot very slow when PXE server is virtualbox

    - by sqrtsben
    As I read in questions here and on the Internet, PXE and VirtualBox don't seem to like each other too much. My problem is the following: I have a virtualized machine which hosts the DHCP and PXE server for 10 native clients. They are rebooted roughly every 10 minutes, and on each reboot they need to boot a small Linux image (the initrd is ~4 MB). Before, I had a native machine running this role, and booting via PXE was very fast. Now, looking at the output of nload, I only get 500 kbit/s whenever one machine is booting. The machines are connected via a GBit switch, so that can't be it. Also, when testing the connection speed to the outside, I have the full bandwidth available. Is VBox just unable to deal with large amounts of UDP packets? Can anyone point me in the right direction here?
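    One way to confirm that UDP throughput through the VM is the bottleneck (a rough test sketch, assuming iperf can be installed on both the VirtualBox guest and one client):

        # on the PXE server (the VirtualBox guest):
        iperf -s -u
        # on a client, push 100 Mbit/s of UDP at it for 10 seconds:
        iperf -c <pxe-server-ip> -u -b 100M -t 10
        # heavy loss here but not in a plain TCP run (iperf without -u)
        # would point at the virtual NIC dropping UDP - which is what TFTP rides on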

    Read the article

  • Selling Visual Studio ALM

    - by Tarun Arora
    Introduction

    As a consultant I have been selling Application Lifecycle Management services using Visual Studio and Team Foundation Server. I've been contacted various times by friends telling me that the ALM processes in their company were benchmarked when dinosaurs walked the earth. Most of these individuals already know the great features Microsoft ALM tools offer and are keen to start a conversation with the CIO, but don't know exactly where to start.

    How you open that first conversation is very important. If you start with "There is this great tooling from Microsoft which offers amazing features to boost developer productivity…", from experience I can tell you the reply from the CIO will be "I already know! Our existing landscape has a combination of bleeding-edge open source and cutting-edge licensed tools which already cover these features quite well; moreover, Microsoft products have a high licensing cost associated with them." You will always find it harder to sell by feature. The trick is to highlight the gaps in the existing processes and tools, then highlight the impact of those gaps on the overall development process. By then you will have captured enough attention to show how the ALM tooling offered by Microsoft not only fills those gaps but offers great value-adds that take their development practices to the next level.

    Rangers ALM Assessment Guide

    Image 1 – Welcome! First look at the Rangers ALM assessment guide

    Most organizations already have some processes in place to cover aspects of ALM. How do you go about proving that there isn't enough coverage in place? This is where the Visual Studio ALM Rangers ALM Assessment guide can help. The assessment guide is really a tool that helps you gather information about development practices and processes within a customer's environment. Several questionnaires are used to identify the current state of individual development lifecycle areas and decide on a desired state for those processes. It also presents guidance and roll-up summaries to help with recommendations moving forward. The ALM Rangers assessment guide can be downloaded from here.

    Image 2 – ALM Assessment guide divided into different functions of SDLC

    The assessment guide is divided into the different functions of the Software Development Lifecycle (listed below), which gives you the ability to assess how mature the company is in each area of the SDLC:

      • Architecture & Design
      • Requirement Engineering & UX
      • Development
      • Software Configuration Management
      • Governance
      • Deployment & Operations
      • Testing & Quality Assurance
      • Project Planning & Management

    Each section has a set of questions; fill in the assessment by selecting "Never/Sometimes/Always" from the Answer column in the question sheets. Each answer carries a weighting towards the overall score. Each question has a link next to it; clicking the link takes you to the Reference sheet, which gives you more details about the question along with a reason for "why you need to ask this question?", "other ways to phrase the question" and "what to expect as an answer from the customer". The trick is to engage the customer in a discussion. You need to probe a lot, listen to the customer, and talk with several team members, preferably without management present, to ensure that you receive candid feedback.
    This reminds me of a funny incident when, during an ALM review, a customer told me that they had a sophisticated semi-automated application deployment process; further discussion revealed that deployment actually involved 72 manual configuration steps per production node. Such observations can be recorded in the Issue Brainstorming worksheet for further consideration later.

    It is also worth explaining the different levels of ALM maturity to the customer. By default the desired state of ALM maturity is set to Standard; it is possible to set a desired state by area. You should strive for Advanced or Dynamic, and it always helps to explain the classification and its advantages.

    Image 3 – ALM levels by description

    The ALM assessment guide helps you arrive at a quantitative measure of the company's ALM maturity. The resulting graph, plotted on a spider's web, shows the company's current state of ALM maturity against the desired state. Further, since the results are classified by area, you can immediately spot the areas where the customer needs immediate help.

    Image 4 – The spider's web!

    The red cross icons are areas shouting out for immediate attention; the yellow exclamation icons are areas that need improvement. These icons are derived from the difference between the current state of ALM maturity and the desired state of ALM maturity.

    Image 5 – Results by area

    Conclusion

    To conclude, the Rangers ALM assessment guide gives you the ability to:

      • Measure the customer's current ALM maturity level
      • Understand the ALM maturity level the customer desires to achieve
      • Capture a healthy list of issues the customer wants to brainstorm further

    Now, what's next?

    Download and get started with the Rangers ALM Assessment Guide. If you have successfully captured the three pieces of information listed above, you are in a great position to make recommendations on the identified areas, highlighting the benefits that Visual Studio ALM tools would offer. In the next post I will cover how to take the ALM assessment results as the base to actually convert your recommendation into a sale.

    Remember to subscribe to http://feeds.feedburner.com/TarunArora. I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment.

    *** A special thanks goes out to fellow rangers Willy, Ethem and Philip for reviewing the blog post and providing valuable feedback. ***

    Read the article

  • Reading from a staging 2D texture array in DirectX10

    - by Don Reba
    I have a DX10 program where I create an array of three 16x16 textures, then map, read, and unmap each subresource in turn. I use a single mip level, set resource usage to staging and CPU access to read. Now, here is the problem:

      • Subresource 0 contains 1024 bytes, pitch 64, as expected.
      • Subresource 1 contains 512 bytes, pitch 64.
      • Subresource 2 contains 256 bytes, pitch 64.

    I expect all three to be the same size. Debugging output is enabled, but not reporting any warnings or errors. Am I missing something, or might this be some sort of driver issue? Here is the code. The language is Nemerle, but C# and C++ would look almost the same. I have looked through the generated code, and am fairly confident the problem is not language-related.

        def cpuTexture = Texture2D
            ( device
            , Texture2DDescription() <-
              {
                  Width             = 16;
                  Height            = 16;
                  MipLevels         = 1;
                  ArraySize         = 3;
                  Format            = Format.R32_Float;
                  Usage             = ResourceUsage.Staging;
                  CpuAccessFlags    = CpuAccessFlags.Read;
                  SampleDescription = SampleDescription(count = 1, quality = 0);
              }
            );

        foreach (subresource in [0 .. 2])
        {
            def data = cpuTexture.Map(subresource, MapMode.Read, MapFlags.None);
            Console.WriteLine($"subresource $subresource");
            Console.WriteLine($"length = $(data.Data.Length)");
            Console.WriteLine($"pitch  = $(data.Pitch)");
            cpuTexture.Unmap(subresource);
        }

    Read the article

  • Why do my speakers get distorted randomly on Windows 7?

    - by Daniel Fischer
    I have a studio monitor setup: two KRK 6s and a Focusrite FireWire Pro 24. Every few hours my speakers sound distorted, and my workaround has been to go to the sound levels, open Properties of the Saffire audio device, go to Advanced > Default Format, and toggle to 16 bit then back to 24 bit. Why does it screw up every few hours? Sometimes one speaker doesn't output at all, and this same process resets it, but that's more rare. Is this an OS issue or a Focusrite driver issue?

    Read the article

  • Create new partition on live production CentOS server

    - by Kimmel
    I have a production server that is running on CentOS. I'd like to create a partition on the server without having to reinstall everything. I have CLI and VNC access to the remote server. Is there any way that I can create a partition safely? Here's my output from fdisk -l:

        Disk /dev/sda: 85.9 GB, 85899345920 bytes
        255 heads, 63 sectors/track, 10443 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00033d5e

           Device Boot  Start    End     Blocks  Id  System
        /dev/sda1   *       1  10444   83885056  83  Linux

    Thanks.
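    Before anything else, it's worth checking whether the disk actually has unallocated space to carve a new partition from (a quick, read-only check; parted is in the CentOS base repos):

        # print the partition table including free space
        sudo parted /dev/sda print free
        # if sda1 spans the whole disk (as the fdisk output suggests),
        # there is nothing to partition without shrinking the existing
        # filesystem offline - not something to do on a live production box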

    Read the article

  • debian squeeze: where do the logs for sysv init scripts go? (why won't my init script work)

    - by sbeam
    My actual problem is trying to debug an init script that starts Resque. It works fine run as root from the command line, but does nothing on boot. It has proper insserv headers, and I've run update-rc.d to create the symlinks and checked that they exist. The script is +x.

        # find /etc/rc*.d -name \*resque\*
        /etc/rc0.d/K01resque
        /etc/rc1.d/K01resque
        /etc/rc2.d/S01resque
        /etc/rc3.d/S01resque
        /etc/rc4.d/S01resque
        /etc/rc5.d/S01resque
        /etc/rc6.d/K01resque
        # ls -l /etc/init.d/resque
        -rwxr-xr-x 1 root root 2093 Oct 24 03:02 /etc/init.d/resque

    The script can be viewed here if you like. It uses LSB functions to log messages, which essentially echo() to STDOUT, I believe. So where does that output go during startup? It's not in /var/log/*log.
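    By default, squeeze doesn't capture the console output of init scripts anywhere; two sketches for getting a log (bootlogd ships with sysvinit on squeeze, and the redirect is a hypothetical edit to your own script):

        # option 1: capture all boot console output to /var/log/boot
        sed -i 's/BOOTLOGD_ENABLE=No/BOOTLOGD_ENABLE=Yes/' /etc/default/bootlogd

        # option 2: make the script log itself - add near the top of /etc/init.d/resque:
        #   exec >> /var/log/resque-init.log 2>&1
        #   set -x
        # then reboot and read the trace to see how far the script gets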

    Read the article

  • How to SSH to guest ubuntu OS in vmplayer4

    - by Grace
    I have installed VMware Player 4.0.4 on Windows 7 and installed Ubuntu 12.04 as the guest OS. Basically I have two problems:

    By default VMware Player uses NAT for network access. I can ping the guest OS from the host OS, but how can I reach the guest OS from outside the host?

    If I change to bridged mode, the Ubuntu guest does get a DHCP IP in the same subnet as the host, but I cannot ping the guest from the host, or vice versa, even if I disable the iptables firewall on the Ubuntu guest like the following:

        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT

    I can't figure it out; could anyone help with this issue? Thanks in advance.
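    For the NAT case, VMware's NAT service supports static port forwarding; a sketch of forwarding host port 2222 to the guest's SSH port (the guest IP is a placeholder for whatever your guest actually leased, and the config path is the usual location on a Windows 7 host):

        # edit C:\ProgramData\VMware\vmnetnat.conf and add under [incomingtcp]:
        #   2222 = 192.168.xxx.xxx:22
        # then restart the "VMware NAT Service" from services.msc
        # afterwards, from any outside machine:
        ssh -p 2222 user@<windows-host-ip>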

    Read the article

  • Search files for text matching format of a Unix directory

    - by BrandonKowalski
    I am attempting to search through all the files in a directory for text matching the pattern of any arbitrary directory path. I hope to use the output of this to build a list of all directories referenced in the files (that part I think I can figure out on my own). I have looked at various regex resources and made my own expression that seems to work in a browser-based tool but not with grep on the command line:

        /\w+[(/\w+)]+

    My understanding so far is that the above expression will look for the leading / of a directory, then look for an indeterminate number of word characters, before looking for a repeating block of the same thing. Any guidance would be greatly appreciated.
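    One detail worth noting: square brackets form a character class, so [(/\w+)] matches single characters rather than grouping; grouping needs parentheses, and grep needs -E to treat them that way. A sketch of what may have been intended (assuming GNU grep, where \w is supported, and that word characters cover your path segments):

        # -E: extended regex, -r: recurse, -o: print only matches, -h: omit filenames
        grep -Eroh '(/\w+)+' . | sort -u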

    Read the article

  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    Fusion HCM SaaS – Integration

    A typical implementation pattern we're seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below.

    Here, you can see that all the "downstream integrations" from the on-premise Core HR are unaffected, because the master for employee data is still your on-premise Core HR system; all updates and new hires are made there (although they may be fed in from Taleo to start with).

    As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract) and load the shared EBS HR tables (which are part of an EBS Financials install anyway), letting your downstream integrations continue to function based on this data, as shown below.

    If you are not coming from EBS HR and there are license implications, you may want to consider:

      • Creating an on-premise warehouse for extracting data from Fusion Apps.
      • Leveraging Fusion Apps Web Services (available to SaaS customers starting R7) to directly retrieve/write data to Fusion Apps.

    Integration Tools

    File Based Loader

    This is the primary mechanism for loading HCM data (both the initial load and incremental updates) into Fusion HCM. Employee and related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. HR2HR has been deprecated in favor of File Based Loader, but for existing customers using HR2HR, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model SQL queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher training session recorded on Apr 18th is available here (under "Recordings"); it will enable a somewhat technical resource to create a data model SQL query.

    Links to documentation & training:

      • Reference documentation for File Based Loader on docs.oracle.com
      • FBL 1.1 MOS Doc Id 1533860.1
      • Sample demo data files for File Based Loader
      • HCM SaaS Integrations ppt and recording

    EBS APIs

    Loading information into EBS full or shared HCM: this could be candidate information being loaded from Taleo into EBS, or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications and EBS Financials).

      • Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1] – a guide to the EBS R12 Integration Repository accessible from an EBS instance.
      • EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1]

    Fusion HCM Extract

    Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM. Refer to the "Configure Identity Sync" doc on MOS for additional mechanisms. Additional documentation (you'll need an oracle.com account to access):

      • HCM Extracts User Guides (Rel 4 & 5)
      • HCM Extract Entity/Attributes (Rel 5)
      • HCM Extract User Guide (Rel 5)

    If you don't have an oracle.com account, download the zipped HCM Extract Rel 5 docs (click on File --> Download on the next screen). View the training recordings on Fusion HCM Extract.

    Benefits Extract

    To set up the benefits extract, refer to the following guide. Page 2-15 of the user documentation describes how to use the benefits extract. Benefit enrollments can also be uploaded into Fusion Benefits; instructions are here, along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (link to the BI Publisher presentation recording from Apr 18th) to create your own version of the benefits extract. You can start with the data model query underlying the benefits extract.

    Payroll Interface

    Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation:

      • Payroll interface guide
      • Sample file
      • DBIs used for the payroll interface

    Usage patterns are always accessible @ http://www.finapps.com

    Read the article

  • samba not starting on ubuntu

    - by Mirage
    I have this output:

        user123@Matrix-Server:~$ /etc/init.d/samba stop
        bash: /etc/init.d/samba: No such file or directory
        sputnik@Matrix-Server:~$ sudo /etc/init.d/samba restart
        sudo: /etc/init.d/samba: command not found
        user123@Matrix-Server:~$
        user123@Matrix-Server:~$ sudo apt-get install samba smbfs
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        samba is already the newest version.
        smbfs is already the newest version.
        The following packages were automatically installed and are no longer required:
          linux-headers-2.6.32-19-generic linux-headers-2.6.32-19
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
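    A sketch of what to check next; on some Ubuntu releases the Samba init scripts are named smbd/nmbd rather than samba:

        # see what the samba package actually installed
        ls /etc/init.d/ | grep -iE 'smb|samba'
        dpkg -L samba | grep init.d
        # then, depending on what exists:
        sudo service smbd restart    # or: sudo /etc/init.d/smbd restart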

    Read the article

  • cURL works but PHP cURL fails to internet [migrated]

    - by wrk2bike
    Trying to diagnose an issue using PHP to cURL to an Internet location on a Red Hat Linux server. cURL is installed and working, and:

        <?php var_dump(curl_version()); ?>

    shows all the correct information in the output. The issue is that I can use PHP to cURL to localhost on the box itself, but not to the Internet (see below). Normally I'd suspect the firewall, but I can cURL from the command line to the Internet without a problem. The box can also update its own software packages, etc. What am I missing? My test is:

        <?php
        function http_head_curl($url, $timeout = 30)
        {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
            curl_setopt($ch, CURLOPT_HEADER, 1);
            curl_setopt($ch, CURLOPT_NOBODY, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $res = curl_exec($ch);
            if ($res === false) {
                throw new RuntimeException("cURL exception: ".curl_errno($ch).": ".curl_error($ch));
            }
            return trim($res);
        }

        // Succeeds, displaying headers
        echo(http_head_curl('localhost'));

        // Fails:
        echo(http_head_curl('www.google.com'));
        ?>
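    Since the CLI works but PHP (presumably running under Apache) doesn't, SELinux is a likely suspect on Red Hat: by default it blocks httpd from making outbound network connections. A quick check, sketched (the first three commands are read-only and harmless):

        # is SELinux enforcing, and is httpd allowed out?
        getenforce
        getsebool httpd_can_network_connect
        # look for recent denials
        grep -i denied /var/log/audit/audit.log | tail
        # if that's the cause, allow it persistently:
        sudo setsebool -P httpd_can_network_connect 1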

    Read the article

  • Throughput tool with decent graphing.

    - by Cory J
    I've been looking through some of the tools available for measuring network throughput, namely iperf, bwping, ttcp, etc. I am planning on doing throughput tests over a long period of time, so what I really need is good graphing output, preferably RRD graphs. The JPerf frontend for iperf will generate a graph, and bmon has a nice command-line graph, but these simply count seconds since the test was started. I am trying to measure trends in throughput over times of the day, so a graph with times and days is necessary. A way to get iperf to log to RRDs would be best; if this isn't possible, could someone point me toward another solution?
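    In case it helps, a minimal sketch of logging iperf results into an RRD yourself (assuming iperf and rrdtool are installed; the filename and 5-minute step are arbitrary choices):

        # one-time: create an RRD storing one bits-per-second gauge, 5-min step, ~1 week
        rrdtool create throughput.rrd --step 300 \
            DS:bps:GAUGE:600:0:U RRA:AVERAGE:0.5:1:2016
        # from cron every 5 minutes: CSV mode (-y C) puts bits/sec in the 9th field
        BPS=$(iperf -c testhost -t 10 -y C | awk -F, '{print $9}')
        rrdtool update throughput.rrd N:$BPS
        # graph by actual time of day whenever needed
        rrdtool graph throughput.png --start -1d \
            DEF:bps=throughput.rrd:bps:AVERAGE LINE2:bps#0000ff:"bits/sec"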

    Read the article

  • Permissions problem on mounting afp drive

    - by Ron Gejman
    I am trying to mount a network drive via AFP on an Ubuntu 10.04 server machine. After installing AFP support, I use the following command:

        sudo mount_afp afp://USER:[email protected]/directory/ /media/dir

    This seems to work, and it tells me that mounting succeeded. However, when I navigate to /media/dir I get the following error:

        cd: cfs: Input/output error

    Permissions in /media are:

        d????????? ?  ?    ?    ?                dir/
        drwx------ 12 user 4.0K 2010-10-25 16:08 otherdisk/

    So there is a permissions problem here. I eventually want to mount this drive automatically using fstab. What do I need to do to make the disk accessible?
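    The d????????? line is the classic signature of a FUSE mount owned by a different user (mount_afp is FUSE-based): by default only the mounting user can even stat the mountpoint, and with sudo that user is root. A sketch of one thing to try:

        # unmount the stale/broken mount first
        sudo umount /media/dir     # or: sudo fusermount -u /media/dir
        # remount as the user who will actually read it, on a directory that user owns
        mkdir -p ~/afp-dir
        mount_afp afp://USER:[email protected]/directory/ ~/afp-dir
        ls -l ~/afp-dir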

    Read the article

  • Is 1GB RAM with integrated graphics sufficient for Unity 3D on 12.04?

    - by Anwar Shah
    I have been using Ubuntu since Hardy Heron (8.04), and I used Natty and Oneiric with Unity. But since I upgraded to Precise (12.04) more than a month ago, the performance of my laptop has not been satisfactory; it is much less responsive than older releases. For example, Unity in 12.04 sometimes takes 2 seconds to show the Dash (which was not the case with Natty, even though people always said Natty's version of Unity was the buggiest). I am assuming that my 1 GB of RAM may now be too little to run Unity on Precise. But since Unity was supposedly improved in Precise, that may not be the case, so I am not sure. Do you have any ideas? Will upgrading the RAM fix it? How much do I need, if an upgrade is required?

        Laptop model:  Lenovo 3000 Y410
        Graphics:      Intel GMA X3100 on the Intel 965GM chipset
        RAM/Memory:    1 GB DDR2 (1 slot empty)
        Swap space:    1.1 GB
        Resolution:    1280x800 widescreen
        Shared RAM for graphics: 256 MB, as the output below suggests

        $ dmesg | grep AGP
        [ 0.825548] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0xd0000000
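    A quick way to tell whether RAM really is the bottleneck is to watch swap activity while reproducing the slow Dash (a sketch using tools in the default install):

        # overall memory/swap picture
        free -m
        # watch the si/so (swap-in/swap-out) columns while opening the Dash;
        # sustained non-zero values mean the machine is thrashing for lack of RAM
        vmstat 1 10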

    Read the article

  • How do I get vmbuilder to progress?

    - by Avery Chan
    I've used the following command to create my VM:

        vmbuilder kvm ubuntu --verbose --suite=precise --flavour=virtual --arch=amd64 \
          -o --libvirt=qemu:///system --tmpfs=- --ip=192.168.2.1 \
          --part=/home/shared/vm1/vmbuilder.partition --templates=/home/shared/vm1/templates \
          --user=vadmin --name=VM-Administrator --pass=vpass \
          --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid \
          --firstboot=/home/shared/vm1/boot.sh --mem=256 --hostname=chameleon --bridge=br0

    I've been trying to follow the directions here. My system just outputs this and hangs at the last line:

        2012-06-26 18:08:29,225 INFO : Mounting tmpfs under /tmp/tmpJbf1dZtmpfs
        2012-06-26 18:08:29,234 INFO : Calling hook: preflight_check
        2012-06-26 18:08:29,243 INFO : Calling hook: set_defaults
        2012-06-26 18:08:29,244 INFO : Calling hook: bootstrap

    How can I get vmbuilder to continue the process instead of dying right here? I'm running 12.04.

    EDIT: adding some additional output details. When I ^C to get out of the hang I see this:

        ^C2012-06-26 18:19:29,622 INFO : Unmounting tmpfs from /tmp/tmpJbf1dZtmpfs
        Traceback (most recent call last):
          File "/usr/bin/vmbuilder", line 24, in <module>
            cli.main()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/contrib/cli.py", line 216, in main
            distro.build_chroot()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 83, in build_chroot
            self.call_hooks('bootstrap')
          File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 67, in call_hooks
            call_hooks(self, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 165, in call_hooks
            getattr(context, func, log_no_such_method)(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/distro.py", line 136, in bootstrap
            self.suite.debootstrap()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/dapper.py", line 269, in debootstrap
            run_cmd(*cmd, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 113, in run_cmd
            fds = select.select([x.file for x in [mystdout, mystderr] if not x.closed], [], [])[0]
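    The traceback shows vmbuilder sitting inside its debootstrap call, so a sketch for isolating the hang is to run debootstrap by hand and watch its progress (the target directory and mirror below are arbitrary test values):

        sudo debootstrap --arch amd64 --verbose precise /tmp/precise-chroot \
            http://archive.ubuntu.com/ubuntu
        # if this also stalls, the problem is the mirror/network/apt proxy,
        # not vmbuilder itself; vmbuilder also accepts --debug for more logging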

    Read the article

  • C#/.NET Little Wonders: The Predicate, Comparison, and Converter Generic Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    In the last three weeks, we examined the Action family of delegates (and delegates in general), the Func family of delegates, and the EventHandler family of delegates, and how they can be used to support generic, reusable algorithms and classes. This week I will be completing my series on the generic delegates in the .NET Framework with a discussion of three more, somewhat less used, generic delegates: Predicate<T>, Comparison<T>, and Converter<TInput, TOutput>.

    These are older generic delegates that were introduced in .NET 2.0, mostly for use in the Array and List<T> classes. Though older, it's good to have an understanding of them and their intended purpose. In addition, you can feel free to use them yourself, though obviously you can also use the equivalents from the Func family of delegates instead.

    Predicate<T> – delegate for determining matches

    The Predicate<T> delegate was a very early delegate developed in the .NET 2.0 Framework to determine if an item was a match for some condition in a List<T> or T[]. The methods that tend to use the Predicate<T> include:

      • Find(), FindAll(), FindLast() – use the Predicate<T> delegate to find items, in a list/array of type T, that match the given predicate.
      • FindIndex(), FindLastIndex() – use the Predicate<T> delegate to find the index of an item, in a list/array of type T, that matches the given predicate.

    The signature of the Predicate<T> delegate (ignoring variance for the moment) is:

        public delegate bool Predicate<T>(T obj);

    So, this is a delegate type that supports any method taking an item of type T and returning bool. In addition, there is a semantic understanding that this predicate is supposed to be examining the item supplied to see if it matches a given criterion.

        // finds first even number (2)
        var firstEven = Array.Find(numbers, n => (n % 2) == 0);

        // finds all odd numbers (1, 3, 5, 7, 9)
        var allOdds = Array.FindAll(numbers, n => (n % 2) == 1);

        // find index of first multiple of 5 (4)
        var firstFiveMultiplePos = Array.FindIndex(numbers, n => (n % 5) == 0);

    This delegate has typically been succeeded in LINQ by the more general Func family, so that Predicate<T> and Func<T, bool> are logically identical. Strictly speaking, though, they are different types, so a delegate reference of type Predicate<T> cannot be directly assigned to a delegate reference of type Func<T, bool>, though the same method can be assigned to both.

        // SUCCESS: the same lambda can be assigned to either
        Predicate<DateTime> isSameDayPred = dt => dt.Date == DateTime.Today;
        Func<DateTime, bool> isSameDayFunc = dt => dt.Date == DateTime.Today;

        // ERROR: once they are assigned to a delegate type, they are strongly
        // typed and cannot be directly assigned to other delegate types.
        isSameDayPred = isSameDayFunc;

    When you assign a method to a delegate, all that is required is that the signature matches. This is why the same method can be assigned to either delegate type, since their signatures are the same. However, once the method has been assigned to a delegate type, it is now a strongly-typed reference to that delegate type, and it cannot be assigned to a different delegate type (beyond the bounds of variance depending on Framework version, of course).
    Comparison<T> – delegate for determining order

    Just as the Predicate<T> generic delegate was birthed to give Array and List<T> the ability to perform type-safe matching, the Comparison<T> was birthed to give them the ability to perform type-safe ordering. The Comparison<T> is used in Array and List<T> for:

      • Sort() – a form of the Sort() method that takes a comparison delegate; this is an alternate way to custom-sort a list/array without having to define custom IComparer<T> classes.

    The signature for the Comparison<T> delegate looks like (without variance):

        public delegate int Comparison<T>(T lhs, T rhs);

    The goal of this delegate is to compare the left-hand side to the right-hand side and return a negative number if lhs < rhs, zero if they are equal, and a positive number if lhs > rhs. Generally speaking, null is considered to be the smallest value of any reference type, so null should always be less than non-null, and two null values should be considered equal.

    In most sort/ordering methods, you must specify an IComparer<T> if you want to do custom sorting/ordering. The Array and List<T> types, however, also allow for an alternative Comparison<T> delegate to be used instead; essentially, this lets you perform the custom sort without having to have the custom IComparer<T> class defined. It should be noted, however, that the LINQ OrderBy() and ThenBy() family of methods do not support the Comparison<T> delegate (though one could easily add their own extension methods to create one, or create an IComparer() factory class that generates one from a Comparison<T>).

    So, given this delegate, we could use it to perform easy sorts on an Array or List<T> based on custom fields. Say, for example, we have a data class called Employee with some basic employee information:

        public sealed class Employee
        {
            public string Name { get; set; }
            public int Id { get; set; }
            public double Salary { get; set; }
        }

    And say we had a List<Employee> that contained data, such as:

        var employees = new List<Employee>
        {
            new Employee { Name = "John Smith", Id = 2, Salary = 37000.0 },
            new Employee { Name = "Jane Doe",   Id = 1, Salary = 57000.0 },
            new Employee { Name = "John Doe",   Id = 5, Salary = 60000.0 },
            new Employee { Name = "Jane Smith", Id = 3, Salary = 59000.0 }
        };

    Now, using the Comparison<T> delegate form of Sort() on the List<Employee>, we can sort our list many ways:

        // sort based on employee ID
        employees.Sort((lhs, rhs) => Comparer<int>.Default.Compare(lhs.Id, rhs.Id));

        // sort based on employee name
        employees.Sort((lhs, rhs) => string.Compare(lhs.Name, rhs.Name));

        // sort based on salary, descending (note switched lhs/rhs order for descending)
        employees.Sort((lhs, rhs) => Comparer<double>.Default.Compare(rhs.Salary, lhs.Salary));

    So again, you could use this older delegate, which has a lot of logical meaning in its name, or use a generic delegate such as Func<T, T, int> to implement the same sort of behavior. All that said, one of the reasons, in my opinion, that Comparison<T> isn't used too often is that it tends to need complex lambdas, and the LINQ ability to order based on projections is much easier to use, though the Array and List<T> sorts tend to be more efficient if you want to perform in-place ordering.
    Converter<TInput, TOutput> – delegate to convert elements

    The Converter<TInput, TOutput> delegate is used by the Array and List<T> classes to specify how to convert elements from an array/list of one type (TInput) to another type (TOutput). It is used in an array/list for:

      • ConvertAll() – converts all elements from a List<TInput> / TInput[] to a new List<TOutput> / TOutput[].

    The delegate signature for Converter<TInput, TOutput> is very straightforward (ignoring variance):

        public delegate TOutput Converter<TInput, TOutput>(TInput input);

    So, this delegate's job is to take an input item (of type TInput) and convert it to a return result (of type TOutput). Again, this is logically equivalent to a newer Func delegate with a signature of Func<TInput, TOutput>. In fact, the latter is how the LINQ conversion methods are defined.

    So, we could use the ConvertAll() syntax to convert a List<T> or T[] to different types, such as:

        // get a list of just employee IDs
        var empIds = employees.ConvertAll(emp => emp.Id);

        // get a list of all emp salaries, as int instead of double:
        var empSalaries = employees.ConvertAll(emp => (int)emp.Salary);

    Note that the expressions above are logically equivalent to using LINQ's Select() method, which gives you a lot more power:

        // get a list of just employee IDs
        var empIds = employees.Select(emp => emp.Id).ToList();

        // get a list of all emp salaries, as int instead of double:
        var empSalaries = employees.Select(emp => (int)emp.Salary).ToList();

    The only difference with using LINQ is that many of the methods (including Select()) use deferred execution, which means that often they will not perform the conversion for an item until it is requested. This has both pros and cons: you gain the benefit of not performing work until it is actually needed, but on the flip side, if you want the results now, there is overhead in the behind-the-scenes work that supports deferred execution (it's supported by the yield return / yield break keywords in C#, which define iterators that maintain current state information). In general, the new LINQ syntax is preferred, but the older Array and List<T> ConvertAll() methods are still around, as is the Converter<TInput, TOutput> delegate.

    Sidebar: variance support update in .NET 4.0

    Just like our descriptions of Func and Action, these three early generic delegates also support more variance in assignment as of .NET 4.0. Their new signatures are:

        // comparison is contravariant on type being compared
        public delegate int Comparison<in T>(T lhs, T rhs);

        // converter is contravariant on input and covariant on output
        public delegate TOutput Converter<in TInput, out TOutput>(TInput input);

        // predicate is contravariant on input
        public delegate bool Predicate<in T>(T obj);

    Thus these delegates can now be assigned to delegates allowing for contravariance (going to a more derived type) or covariance (going to a less derived type) based on whether the parameters are input or output, respectively.

    Summary

    Today, we wrapped up our generic delegates discussion by looking at three lesser-used delegates: Predicate<T>, Comparison<T>, and Converter<TInput, TOutput>. All three of these tend to be replaced by their more generic Func equivalents in LINQ, but that doesn't mean you shouldn't understand what they do, or can't use them in your own code, as their names carry semantic meaning that sometimes gets lost in the more generic Func name.

    Read the article

  • Mac OS X Assembly Language Esoteria

    - by veryfoolish
    I've been playing around with assembly and object files in general on Mac OS X, and was wondering if somebody could provide some edification. Specifically, I'm wondering what the extra code GCC generates when compiling the C file in the following example does. I have a toy C program so I can comprehend the assembly output.

        int main()
        {
            int a = 5;
            int b = 5;
            int c = a + b;
        }

    Running this through gcc -S creates the following assembly:

            .text
        .globl _main
        _main:
        LFB2:
            pushq   %rbp
        LCFI0:
            movq    %rsp, %rbp
        LCFI1:
            movl    $5, -4(%rbp)
            movl    $5, -8(%rbp)
            movl    -8(%rbp), %eax
            addl    -4(%rbp), %eax
            movl    %eax, -12(%rbp)
            leave
            ret
        LFE2:
            .section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support
        EH_frame1:
            .set L$set$0,LECIE1-LSCIE1
            .long L$set$0
        LSCIE1:
            .long 0x0
            .byte 0x1
            .ascii "zR\0"
            .byte 0x1
            .byte 0x78
            .byte 0x10
            .byte 0x1
            .byte 0x10
            .byte 0xc
            .byte 0x7
            .byte 0x8
            .byte 0x90
            .byte 0x1
            .align 3
        LECIE1:
        .globl _main.eh
        _main.eh:
        LSFDE1:
            .set L$set$1,LEFDE1-LASFDE1
            .long L$set$1
        LASFDE1:
            .long LASFDE1-EH_frame1
            .quad LFB2-.
            .set L$set$2,LFE2-LFB2
            .quad L$set$2
            .byte 0x0
            .byte 0x4
            .set L$set$3,LCFI0-LFB2
            .long L$set$3
            .byte 0xe
            .byte 0x10
            .byte 0x86
            .byte 0x2
            .byte 0x4
            .set L$set$4,LCFI1-LCFI0
            .long L$set$4
            .byte 0xd
            .byte 0x6
            .align 3
        LEFDE1:
        .subsections_via_symbols

    The block under LCFI1 seems to contain the actual logic of the program, but I'm not sure what the miscellaneous other stuff is for. Also, is there any scheme these labels are following? I'm sorry this is such a vague question; I'd appreciate anything, including being pointed to a resource where I can find out more about this. Thanks!
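    A sketch for confirming that the second block is exception-unwinding metadata rather than executable code (-fno-asynchronous-unwind-tables is a standard GCC flag; exact output will differ by GCC version):

        # suppress the .eh_frame unwind tables and diff the two listings
        gcc -S toy.c -o with_eh.s
        gcc -S -fno-asynchronous-unwind-tables toy.c -o without_eh.s
        diff with_eh.s without_eh.s
        # everything that disappears (EH_frame1, LSCIE1/LECIE1, _main.eh, ...)
        # is DWARF call-frame info used for stack unwinding, not program logic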

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday's blog post we answered what Big Data is. Today we will understand why and how the evolution of Big Data happened. Though the answer is simple, I would like to tell it in the form of a history lesson.

    Data in Flat Files

    In the early days, data was stored in flat files, and there was no structure to them. If any data had to be retrieved from a flat file, it was a project by itself. There was no way to retrieve data efficiently, and data integrity was just a term people discussed, without any modeling or structure around it. Databases residing in flat files had more issues than we would like to discuss in today's world; it was more like a nightmare when any data processing was involved in an application. Though applications developed at that time were also not that advanced, the need for data was always there, along with the need for proper data management.

    Edgar F. Codd and the 12 Rules

    Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of the database began to see discipline. The relational database was a promised land for all the users of unstructured databases: it brought relationships between data as well as improved performance of data retrieval. The database world immediately saw a major transformation, and every vendor and database user started to adopt the relational database model.

    Relational Database Management Systems

    After Edgar F. Codd proposed his 12 rules for the RDBMS, many different vendors started to build applications and tools to support relationships between data. This was indeed a learning curve for many developers who had never before worked with the modeling of databases. However, as time passed, pretty much everybody accepted the relational model and started to evolve products that perform their best within the boundaries of RDBMS concepts. This was the best era for databases, and it gave the world extreme experts as well as some of the best products. The Entity Relationship model also evolved at the same time: in software engineering, an Entity-Relationship model (ER model) is a data model for describing a database in an abstract way.

    Enormous Data Growth

    Well, everything was going fine for the RDBMS in the database world. As there were no major challenges, the adoption of RDBMS applications and tools was pretty much universal, and there was at times a race to make the developer's life easier with RDBMS management tools. Due to the extreme popularity and ease of use of these systems, pretty much all data was stored in RDBMS systems. New-age applications were built, and social media took the world by storm. Every organization felt pressure to provide the best experience for its users based on the data it had. While all this was going on, data kept growing in pretty much every organization and application.

    Data Warehousing

    The enormous data growth presented a big challenge for organizations that wanted to build intelligent systems based on the data and provide a near-real-time, superior user experience to their customers. Various organizations immediately started building data warehousing solutions where the data was stored and processed.

    Business intelligence became an everyday need. Data was received from the transaction systems and processed overnight to build intelligent reports from it. Though this was a great solution, it had its own set of challenges. The relational database model and data warehousing concepts were all built with traditional relational database modeling in mind, and they still faced many challenges when unstructured data was present.

    An Interesting Challenge

    Every organization had expertise in managing structured data, but the world had already changed to unstructured data. There was intelligence in the videos, photos, SMS, text, social media messages and various other data sources. All of these now needed to be brought onto a single platform and built into a uniform system that does what the business needs. The way we do business has also changed: there was a time when users only got the features the technology supported; now users ask for a feature, and the technology is built to support it. The need for real-time intelligence from fast-paced data flows is becoming a necessity. A large amount (Volume) of many different kinds (Variety) of high-speed data (Velocity): these are the defining properties of this data. Traditional database systems hit their limits in resolving the challenges this new kind of data presents. Hence the need for Big Data science. We need innovation in how we handle and manage data, and creative ways to capture data and present it to users. Big Data is reality!

    Tomorrow

    In tomorrow's blog post we will discuss the basics of Big Data architecture.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Belkin Flip KVM keyboard skip problems

    - by Craig
    I have just bought a Belkin Flip 2-port KVM. Functionally it is almost there, except I have a keyboard sticking problem: if I type the word 'Hello', it will often (about 25% of the time) output 'Hellooooooooooooooooooooo'. If I plug the keyboard directly into the USB on the computer I don't have this problem; it only happens when plugged into the KVM. It feels like a USB speed problem.

    Follow-up: it appears I have the same problem with the mouse, which will jump from one side of the screen to the other as I move it. The mouse is annoying, but only half as much as the keyboard.

    Read the article

  • ffmpeg, vlc - Unable to find input stream

    - by zozo
    Good day to all. I have some "little" problems with ffserver and ffmpeg. What I need to do is broadcast live video. So I got the cam, used VLC, and used the send-stream option, sending the stream to 192.168.1.9:64555, which is a virtual machine on the same computer, running CentOS. On the virtual machine I run the command:

        ffmpeg -i 192.168.1.9:64555 output.mpg

    The response is "unable to find file whatever". Can anyone tell me what I did wrong? Thank you and have a great day.
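    One likely detail: ffmpeg needs a protocol prefix on network inputs, otherwise it treats the argument as a filename. A sketch, assuming VLC is streaming UDP to that address/port:

        # tell ffmpeg the input is a UDP socket, not a file
        ffmpeg -i udp://192.168.1.9:64555 output.mpg
        # for an HTTP stream from VLC the prefix would be http:// instead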

    Read the article

  • Apache hanging with MaxClients is reached

    - by Ash White
    My Apache 2.2 (prefork MPM) is hanging when MaxClients is reached, rather than queueing up requests and serving them when child processes become free. When this happens, the web server is totally unresponsive until it is manually restarted.

    The server stack is Ubuntu 8, MySQL 5, PHP 5. The hardware is dual Xeons (2.8 GHz) with 2 GB of RAM. It serves 30,000 - 50,000 pageviews per day. Static images, CSS, and JS are offloaded to a separate server, and PHP is cached using eAccelerator. The HTML output of many pages is cached to the filesystem. Relevant Apache directives:

        KeepAlive On
        MaxKeepAliveRequests 50
        KeepAliveTimeout 2
        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadsPerChild 25
        MaxRequestsPerChild 2000
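    When it wedges, it helps to see what the children are actually stuck on before restarting (a sketch; netstat is on a stock box, and mod_status needs to be enabled for the second part):

        # count connection states on port 80 - a pile of ESTABLISHED or CLOSE_WAIT
        # usually means children are stuck waiting on slow backends (MySQL/PHP)
        netstat -ant | awk '$4 ~ /:80$/ {print $6}' | sort | uniq -c
        # with mod_status enabled, the scoreboard shows per-child activity
        apache2ctl status    # or browse /server-status in a browser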

    Read the article

  • EBS full device confusion

    - by Mike
    I have a 500GB EBS device (/dev/xvdf) mounted at /vol, and all data on the box seems to be writing to /vol correctly (see the du output below). For some reason /dev/xvda1 is totally full. Any idea what's going on here?

        $ df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1       32G   30G  8.0K 100% /
        udev             34G  8.0K   34G   1% /dev
        tmpfs            14G  176K   14G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none             34G     0   34G   0% /run/shm
        /dev/xvdb       827G  201M  785G   1% /mnt
        /dev/xvdf       500G  145G  356G  29% /vol

        $ du -sh *
        8.7M  bin
        18M   boot
        8.0K  dev
        5.1M  etc
        48K   home
        0     initrd.img
        80M   lib
        4.0K  lib64
        16K   lost+found
        4.0K  media
        20K   mnt
        4.0K  opt
        0     proc
        40K   root
        176K  run
        7.1M  sbin
        4.0K  selinux
        4.0K  srv
        0     sys
        4.0K  tmp
        414M  usr
        356M  var
        0     vmlinuz
        145G  vol
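    Since du of / (excluding the mounts) accounts for barely 1 GB, the 30 GB is hiding somewhere df can see but du can't; two usual suspects, sketched below:

        # 1) deleted files still held open by a process (space isn't freed until close)
        sudo lsof +L1 | sort -k7 -n | tail
        # 2) files written into /vol before the EBS volume was mounted over it;
        #    a bind mount exposes the underlying directory for inspection
        sudo mkdir /tmp/rootfs && sudo mount --bind / /tmp/rootfs
        sudo du -sh /tmp/rootfs/vol
        sudo umount /tmp/rootfs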

    Read the article
