Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • When module calling gets ugly

    - by Pete
    Has this ever happened to you? You've got a suite of well designed, single-responsibility modules, covered by unit tests. In any higher-level function you code, you are (for 95% of the code) simply taking output from one module and passing it as input to the next. Then you notice this higher-level function has turned into a 100+ line script with multiple responsibilities. Here is the problem: it is difficult (impossible) to test that script. At least, it seems so. Do you agree? In my current project, all of the bugs came from this script. Further detail: each script represents a unique solution, or algorithm, formed by using different modules in different ways. Question: how can you remedy this situation? Knee-jerk answer: break the script up into single-responsibility modules. Comment on the knee-jerk answer: it already is! Best answer I can come up with so far: create higher-level connector objects which "wire" modules together in particular ways (take output from one module, feed it as input to another module). Thus if our script was:

        FooInput fooIn = new FooInput(1, 2);
        FooOutput fooOut = fooModule(fooIn);
        double runtimeValue = getSomething(fooOut.whatever);
        BarInput barIn = new BarInput(runtimeValue, fooOut.someOtherValue);
        BarOutput barOut = barModule(barIn);

    it would become, with a connector:

        FooBarConnectionAlgo fooBarConnector = new FooBarConnectionAlgo(fooModule, barModule);
        FooInput fooIn = new FooInput(1, 2);
        BarOutput barOut = fooBarConnector.run(fooIn);

    So the advantage is, besides hiding some code and making things clearer, we can test FooBarConnectionAlgo. I'm sure this situation comes up a lot. What do you do?
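    A minimal sketch of what such a connector could look like (C#, assuming the modules can be passed in as delegates; FooInput, FooOutput, BarInput, BarOutput and getSomething are the hypothetical types and helper from the snippet above):

        public class FooBarConnectionAlgo
        {
            private readonly Func<FooInput, FooOutput> fooModule;
            private readonly Func<BarInput, BarOutput> barModule;

            public FooBarConnectionAlgo(Func<FooInput, FooOutput> fooModule,
                                        Func<BarInput, BarOutput> barModule)
            {
                this.fooModule = fooModule;
                this.barModule = barModule;
            }

            // The whole wiring step becomes one testable unit: run Foo,
            // derive the runtime value, feed both into Bar.
            public BarOutput run(FooInput fooIn)
            {
                FooOutput fooOut = fooModule(fooIn);
                // getSomething is the asker's helper, assumed available here
                double runtimeValue = getSomething(fooOut.whatever);
                BarInput barIn = new BarInput(runtimeValue, fooOut.someOtherValue);
                return barModule(barIn);
            }
        }

    In a test, fooModule and barModule can then be replaced by stubs, so the wiring logic is exercised without touching the real modules.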

    Read the article

  • CUDA & MSI GT60 with Optimus enabled GTX670M?

    - by user1076693
    I have an MSI GT60 laptop with an Optimus-enabled GTX 670M GPU, and I have been trying to get CUDA going in an Ubuntu 12.04 environment. I realize that Optimus is not supported on Linux, but I have read the following post suggesting that CUDA works for hybrid GPUs: How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics? I installed the NVIDIA driver via:

        sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
        sudo apt-get install nvidia-current

    The resulting driver version is 302.17, and the GTX 670M has supposedly been supported since 295.59. I also downloaded CUDA 4.2 from the NVIDIA site and compiled it against the nvidia-current libraries. Unfortunately, when I run deviceQuery in the CUDA SDK, I get the following output:

        cudaGetDeviceCount returned 38 -> no CUDA-capable device is detected

    Checking /proc/driver/nvidia/gpus/0/information gives the following:

        Model: GeForce GTX 670M
        IRQ: 16
        GPU UUID: GPU-????????-????-????-????-????????????
        Video BIOS: ??.??.??.??.??
        Bus Type: PCI-E
        DMA Size: 32 bits
        DMA Mask: 0xffffffffff
        Bus Location: 0000:01.00.0

    Here is the output of "lspci | grep VGA":

        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation Device 1213 (rev ff)

    So... what am I doing wrong? Thanks!
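    One direction worth trying (a suggestion, not part of the post; note that the "(rev ff)" in the lspci output typically means Optimus currently has the discrete card powered down) is Bumblebee, which powers the NVIDIA GPU on demand on 12.04:

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        optirun ./deviceQuery    # run the CUDA sample with the discrete GPU powered on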

    Read the article

  • "sudo apt -get install foo-" causes removing foo package and everything depends on it

    - by M.Elmi
    While working at the command prompt, I accidentally typed the following command:

        sudo apt-get install python3-

    and Ubuntu started removing python3 and everything that depends on it (including Firefox and much more). Fortunately I closed that terminal immediately and reverted everything by checking the dpkg log file, but I was wondering why an install command should act like a remove. Is it a bug? Consider the situation where you are looking for a package name (pressing Tab twice) and going through the possibilities by pressing Enter, and those Enter keys remain in the keyboard buffer and... youhaaaa... apt-get is removing the entire installation in front of your eyes.
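    For reference, this is documented apt-get behaviour rather than a bug: a trailing hyphen on an install line marks that package for removal, and a trailing plus on a remove line marks it for installation (see the apt-get man page). A quick illustration with placeholder package names:

        # one transaction: install bar, remove foo
        sudo apt-get install foo- bar
        # the converse also works
        sudo apt-get remove foo bar+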

    Read the article

  • Powershell script to append an extension to a file, input from CSV

    - by Jeremy
    Hi all, all I need is to have an Excel list of file paths and use PowerShell to append (not replace) the same extension on to each file. It should be very simple, right? The problem I'm seeing is that if I run

        import-csv -path myfile.csv | write-host

    I get the following output:

        @{FullName=C:\Users\jpalumbo\test\appendto.me}
        @{FullName=C:\Users\jpalumbo\test\append_list.csv}
        @{FullName=C:\Users\jpalumbo\test\leavemealone.txt}

    In other words it looks like it's outputting the CSV "formatting" as well. However, if I just issue import-csv -path myfile.csv, the output is what I expect:

        FullName
        --------
        C:\Users\jpalumbo\test\appendto.me
        C:\Users\jpalumbo\test\append_list.csv
        C:\Users\jpalumbo\test\leavemealone.txt

    Clearly there's no file called "@{FullName=C:\Users\jpalumbo\test\leavemealone.txt}", and a rename on that won't work, so I'm not sure how best to get this data out of the import-csv command, whether to store it in an object, or what. Thanks!!
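    What is being printed is the string form of the object each CSV row becomes; expanding the FullName property first yields plain path strings. A sketch, under the assumption that the CSV has a FullName column and ".bak" stands in for the extension to append:

        Import-Csv -Path myfile.csv |
            Select-Object -ExpandProperty FullName |
            ForEach-Object { Rename-Item -Path $_ -NewName ((Split-Path $_ -Leaf) + ".bak") }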

    Read the article

  • Where does the information shown by OS X Terminal 'Display all commands' feature come from?

    - by Sergio Acosta
    I just learned that if you hit and hold ESC while in the Mac Terminal, a prompt appears after a few seconds offering to show every command available on your system, including aliases, built-ins, and executables on your PATH. Source: http://www.mactricksandtips.com/2008/05/list-all-possible-terminal-commands.html However, the output is shown through a more filter, and I cannot grep it or pipe it to another command. Does anyone know how this magic output is generated? Is it just generated on the fly by Terminal? Is there a bash command that can be called explicitly on the command line to get the same result? It is mostly curiosity, but I would love to be able to get the results as text I can post-process and not just browse on screen.
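    The listing comes from the shell's own completion machinery rather than from Terminal itself, and bash's compgen builtin can reproduce it as ordinary, pipeable text:

        # aliases, builtins, commands on PATH, keywords, and shell functions
        compgen -abck -A function | sort -u | grep '^ls'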

    Read the article

  • iptables: built-in INPUT chain in nat table?

    - by ughmandaem
    I have a Gentoo Linux system running Linux 2.6.38-rc8. I also have a machine running Ubuntu with Linux 2.6.35-27, and a virtual machine running Debian unstable with Linux 2.6.37-2. On the Gentoo and Debian systems I have an INPUT chain built into my nat table, in addition to PREROUTING, OUTPUT, and POSTROUTING. On Ubuntu, I only have PREROUTING, OUTPUT, and POSTROUTING. I am able to use this INPUT chain with SNAT to modify the source of a packet that is destined for the local machine (imagine simulating an incoming spoofed IP to a local application, or just testing a virtual host configuration). This is possible with 2 firewall rules on Gentoo and Debian, but seemingly not on Ubuntu. I looked around for documentation on changes to the SNAT target and the INPUT chain of the nat table and couldn't find anything. Does anyone know if this is a configuration issue, or is it something that was just added in more recent versions of Linux?
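    For reference, a sketch of the kind of two-rule setup described (addresses are placeholders; SNAT is documented as valid in the nat table's POSTROUTING and INPUT chains on kernels that expose the latter):

        # rewrite the apparent source of locally destined packets
        iptables -t nat -A INPUT -s 203.0.113.7 -j SNAT --to-source 198.51.100.99
        # list the nat table to see which chains this kernel provides
        iptables -t nat -L -n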

    Read the article

  • Cron is running but not outputting data

    - by Youri
    I'm trying to make my Amazon EC2 instances stop and start via a crontab. The EC2 API tools are successfully installed, and manually it all works. The cron entry (which I put in with the command crontab -e):

        10 * * * * ubuntu /usr/bin/ec2-stop-instances [instanceid] > /tmp/ec2.log

    The file /tmp/ec2.log is created. When I use the command grep CRON /var/log/syslog I see the cron has actually run. I don't get any output in the /tmp/ec2.log file, though. I have set all the Amazon variables needed. Even if I on purpose create a wrong cron entry, like this:

        10 * * * * ubuntu /usr/bin/ec2-stop-instancwweqes [instanceid] > /tmp/ec2.log

    I get no output in the file. Shouldn't there be an error? I also tried not defining the user:

        10 * * * * /usr/bin/ec2-stop-instances [instanceid] > /tmp/ec2.log

    And the direct command:

        10 * * * * ubuntu ec2-stop-instances [instanceid] > /tmp/ec2.log

    Can someone please help me? If I can somehow debug, I can get to the solution. Thanks in advance.
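    A few observations (a diagnosis offered here, not part of the original post): a crontab edited with crontab -e is per-user and takes no user column, so a leading "ubuntu" is itself executed as the command; stderr is never redirected, so errors never land in the log even though the shell still creates the empty file; and cron runs with a minimal environment, so EC2 variables set in an interactive shell profile are invisible to it. A corrected entry might look like this (the variable values are placeholders for wherever the tools and JVM actually live):

        JAVA_HOME=/usr/lib/jvm/default-java
        EC2_HOME=/opt/ec2-api-tools
        10 * * * * /usr/bin/ec2-stop-instances [instanceid] >> /tmp/ec2.log 2>&1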

    Read the article

  • How come Indiegogo links shared on G+ link to their page instead of displaying URL?

    - by Ivan Vucica
    If an Indiegogo link, such as this one, gets shared on G+, their G+ page is displayed in the post in the place where the URL would commonly be displayed. I've tried analyzing the HTML, but came up empty-handed: there's Twitter Cards metadata, there's Open Graph, there is a G+ button, but I found nothing that links to Indiegogo's page, not even rel="publisher". So, how does Indiegogo achieve this?

    Read the article

  • Multiple OpenSSL vulnerabilities in Sun SPARC Enterprise M-series XCP Firmware

    - by RitwikGhoshal
    CVE ID          Description                                                     CVSSv2 Base Score
    CVE-2008-5077   Improper Input Validation vulnerability                         5.8
    CVE-2008-7270   Cryptographic Issues vulnerability                              4.3
    CVE-2009-0590   Improper Restriction of Operations within the Bounds of a
                    Memory Buffer vulnerability                                     5.0
    CVE-2009-3245   Improper Input Validation vulnerability                         10.0
    CVE-2010-4180   Cipher suite downgrade vulnerability                            4.3

    Component: OpenSSL in XCP1113 Firmware
    Product and Resolution:
    Sun SPARC Enterprise M3000 (SPARC: 14216085)
    Sun SPARC Enterprise M4000 (SPARC: 14216091)
    Sun SPARC Enterprise M5000 (SPARC: 14216093)
    Sun SPARC Enterprise M8000 (SPARC: 14216096)
    Sun SPARC Enterprise M9000 (SPARC: 14216098)

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • How do you splice out a part of an xvid-encoded avi file with ffmpeg? (no problems with other files)

    - by yegor
    I'm using the following command, which works for most files, except what seem to be xvid-encoded ones:

        /usr/bin/ffmpeg -sameq -i file.avi -ss 00:01:00 -t 00:00:30 -ac 2 -r 25 -copyts output.avi

    So this should splice out 30 seconds of video + audio, starting from the 1 minute mark. It does START encoding at the 00:01:00 mark, but for some reason it goes all the way to the end of the file, ignoring that I want just 30 seconds. The output looks like this:

        FFmpeg version git-ecc4bdd, Copyright (c) 2000-2010 the FFmpeg developers
          built on May 31 2010 04:52:24 with gcc 4.4.3 20100127 (Red Hat 4.4.3-4)
          configuration: --enable-libx264 --enable-libxvid --enable-libmp3lame --enable-libopenjpeg --enable-libfaac --enable-libvorbis --enable-gpl --enable-nonfree --enable-libxvid --enable-pthreads --enable-libfaad --extra-cflags=-fPIC --enable-postproc --enable-libtheora --enable-libvorbis --enable-shared
          libavutil 50.15. 2 / 50.15. 2
          libavcodec 52.67. 0 / 52.67. 0
          libavformat 52.62. 0 / 52.62. 0
          libavdevice 52. 2. 0 / 52. 2. 0
          libavfilter 1.20. 0 / 1.20. 0
          libswscale 0.10. 0 / 0.10. 0
          libpostproc 51. 2. 0 / 51. 2. 0
        [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected
        Input #0, avi, from 'file.avi':
          Metadata:
            ISFT : VirtualDubMod 1.5.10.2 (build 2540/release)
          Duration: 00:02:00.00, start: 0.000000, bitrate: 1587 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], 25 tbr, 25 tbn, 25 tbc
            Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
        File 'lol6.avi' already exists. Overwrite ? [y/N] y
        Output #0, avi, to 'lol6.avi':
          Metadata:
            ISFT : Lavf52.62.0
            Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], q=2-31, 200 kb/s, 25 tbn, 25 tbc
            Stream #0.1: Audio: mp2, 48000 Hz, 2 channels, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected
        [buffer @ 0x184b610]Buffering several frames is not supported. Please consume all available frames before adding a new one.
        frame= 1501 fps=104 q=0.0 Lsize= 15612kB time=30.02 bitrate=4259.7kbits/s ts/s
        video:15303kB audio:235kB global headers:0kB muxing overhead 0.482620%

    If I convert this file to mp4, for example, and then perform the same action, it works perfectly.
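    One thing worth trying (a suggestion, not from the post): with packed-B-frame XviD input, placing -ss before -i makes ffmpeg seek on the input side instead of decoding from the start, and a plain stream copy avoids re-encoding entirely. Note that a stream copy can only cut on keyframes, so the split points may shift slightly:

        # input-side seek, 30 s duration, no re-encode
        ffmpeg -ss 00:01:00 -i file.avi -t 00:00:30 -vcodec copy -acodec copy output.avi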

    Read the article

  • A terminal emulator for ex-Windows users

    - by Dan
    There are several things I would like to see improved in the Ubuntu terminal emulator: coloring, like in source code; the copy-and-paste keyboard shortcuts I used all the time in Windows, Ctrl-C and Ctrl-V (most people here in Ubuntu use Ctrl+C and Ctrl+V to copy and paste everywhere except the terminal; I think that's annoying for newcomers, and I don't worry about the historical reasons); and a feature to save all the output to a log file. UPDATE: Can the terminal be a powerful, feature-full, user-friendly tool like a modern IDE? A Linux user can spend 30% of their time in the terminal, and programmers no longer code in a notepad. Can I see a history pane? Suggestions? A directory pane? A commands list? Search for words in the output? Contextual behavior, like "Search in Google" on a mouse right-click? Tips-and-tricks learning? Time is money! Please, people, give me a link to the 21st-century terminal.

    Read the article

  • How do I get a Canon Pixma MP150 to print?

    - by Radu Erdei
    I successfully installed my Canon Pixma MP150 printer (and scanner) in Ubuntu 12.04 and made it the default printer, but I cannot print anything. Watching the print queue, I see that the printer receives my documents, but only for a few seconds, after which the queue empties without anything actually being printed. I tried printing everything from large PDFs to quite tiny txt files. I reinstalled the printer from the CUPS web interface (127.0.0.1:631), but again, no luck. Any idea on the matter?
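    A first debugging step worth trying (a generic CUPS suggestion, not from the post): watch the CUPS error log while a job vanishes from the queue; it usually names the filter or backend that failed:

        # raise CUPS verbosity, then watch the log while printing
        sudo cupsctl --debug-logging
        tail -f /var/log/cups/error_log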

    Read the article

  • Backup broken PostgreSQL 8.4 without pg_dump

    - by Daniil
    So, I have a problem. PostgreSQL 8.4 won't start or restart, and gives no output. It worked for 3 months, until the hosting provider rebooted the server. Now it is completely broken: it won't start, and it produces no output or log.

        pg_dump: [archiver (db)] connection to database "postgres" failed: No such file or directory
        Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    Now I want to back up my database (or just get the pgsql socket up) so I can reinstall PostgreSQL. How?
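    Since pg_dump needs a running server, the usual fallback is a file-system-level copy of the cluster's data directory, which a fresh install of the same major version can be pointed back at. A sketch assuming the stock Debian/Ubuntu layout (adjust the paths if the cluster lives elsewhere):

        # make sure no postgres process is touching the files
        sudo service postgresql stop
        # archive the data directory and the configuration
        sudo tar czvf pg84-backup.tar.gz /var/lib/postgresql/8.4/main /etc/postgresql/8.4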

    Read the article

  • SSIS Catalog: How to use environment in every type of package execution

    - by Kevin Shyr
    Here is a good blog post on how to create an SSIS catalog and set up environments: http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx Here I will summarize the 3 ways I know so far to execute a package while using variables set up in an SSIS catalog environment.

    First way: we have an SSIS project with a reference to an environment, and one of the project parameters uses a value set up in the environment called "Development". With this setup you are limited to calling the packages by right-clicking on them in the SSIS catalog list and selecting Execute, but you are free to choose an absolute or relative path to the environment. The screenshot in the original post shows the 2 available paths to your SSIS environments. Personally, I use the absolute path because of option 3, just to keep everything simple for myself.

    The second option is to call the package through a SQL Agent job. This does require you to configure your project to reference an environment and use its variable. When a job step is set up, the configuration part will require you to select that reference again. This is more useful when you want to automate the same package to run in different environments.

    The third option is the most important to me, as I have an SSIS framework that calls hundreds of packages. The main part of the stored procedure is in this post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx), but the top part had to be modified to include the logic to use an environment reference:

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
            @PackageName NVARCHAR(255)
            , @ProjectFolder NVARCHAR(255)
            , @ProjectName NVARCHAR(255)
            , @AuditKey INT
            , @DisableNotification BIT
            , @PackageExecutionLogID INT
            , @EnvironmentName NVARCHAR(128) = NULL
            , @Use32BitRunTime BIT = 0
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;
            -- Create a package execution
            IF @EnvironmentName IS NULL
            BEGIN
                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name = @PackageName,
                    @execution_id = @execution_id OUTPUT,
                    @folder_name = @ProjectFolder,
                    @project_name = @ProjectName,
                    @use32bitruntime = @Use32BitRunTime;
            END
            ELSE
            BEGIN
                DECLARE @EnvironmentID AS INT;
                SELECT @EnvironmentID = [reference_id]
                FROM SSISDB.[internal].[environment_references] WITH (NOLOCK)
                WHERE [environment_name] = @EnvironmentName
                    AND [environment_folder_name] = @ProjectFolder;
                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name = @PackageName,
                    @execution_id = @execution_id OUTPUT,
                    @folder_name = @ProjectFolder,
                    @project_name = @ProjectName,
                    @reference_id = @EnvironmentID,
                    @use32bitruntime = @Use32BitRunTime;
            END

    Read the article

  • How to force Multiple Monitors correct resolutions for LightDM?

    - by Hanynowsky
    I am affected by this bug: https://bugs.launchpad.net/ubuntu/+source/unity-greeter/+bug/874241 If, like me, you have a laptop connected to a second monitor of higher resolution, LightDM at the login stage mirrors the displays on both screens and assigns them a common resolution (1024x768 in my case), instead of extending the desktop (primary screen with the greeter, secondary with just a logo, as described in the Multiple Monitors UX specification for 12.04). Here is my xrandr -q:

        @L502X:~$ xrandr -q
        Screen 0: minimum 320 x 200, current 1920 x 1848, maximum 8192 x 8192
        LVDS1 connected 1366x768+309+1080 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768 60.0*+
           1360x768 59.8 60.0
           1024x768 60.0
           800x600 60.3 56.2
           640x480 59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
        HDMI1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 510mm x 287mm
           1920x1080 60.0*+
           1600x1200 60.0
           1680x1050 60.0
           1280x1024 60.0
           1440x900 59.9
           1280x960 60.0
           1280x800 59.8
           1024x768 60.0
           800x600 60.3 56.2
           640x480 60.0
        DP1 disconnected (normal left inverted right x axis y axis)

    I tried to force LightDM to execute some xrandr commands in order to set the right resolution for each monitor and extend the desktop, but I get a LOW GRAPHICS MODE error ("You're running in low graphics mode; your screen, input devices... did not get detected"). I created a simple script named lightdmxrand.sh:

        #!/bin/sh
        xrandr --output HDMI1 --primary --mode 1920x1080 --output LVDS1 --mode 1366x768 --below HDMI1

    and told LightDM to run it in /etc/lightdm/lightdm.conf:

        [SeatDefaults]
        greeter-session=unity-greeter
        user-session=ubuntu
        greeter-setup-script=/usr/bin/numlockx on
        display-setup-script=/home/hanynowsky/lightdmxrandr.sh

    Does anyone know what is wrong? Thanks in advance.
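    Two things worth checking (observations, not from the post): the script is created as lightdmxrand.sh but the config references lightdmxrandr.sh, and display-setup-script fails silently if its target is not executable:

        # make the filename in lightdm.conf and on disk match, then:
        chmod +x /home/hanynowsky/lightdmxrand.sh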

    Read the article

  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I were to use multiple images in a single texture for a 3D cube, how would I go about re-using each vertex (having 8 in total vs 24)? With a single buffer of 8 vertices, I don't see how I'd properly reuse the UV values. Any help with that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy; the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the 3D version above has me quite befuddled.

    Read the article

  • recommended way to collect email notifications from crond in Arch Linux

    - by nponeccop
    Arch Linux doesn't have sendmail installed by default, so I get the following messages in my syslog:

        Sep 15 13:16:01 zorro crond[18497]: mailing cron output for user collectors sh cronjob.sh
        Sep 15 13:16:01 zorro crond[18497]: unable to exec /usr/sbin/sendmail: cron output for user collectors sh cronjob.sh to /dev/null

    What is the recommended way to fix this default behaviour so actual messages are sent? heirloom-mailx is installed and capable of sending email messages using SMTP. Is it possible for crond to use mailx to send notifications? Is there any drop-in replacement for sendmail that sends using mailx? Sendmail is not even in the repositories.
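    One workable approach (a sketch, not an official Arch recommendation): give crond the /usr/sbin/sendmail it expects as a thin shim over mailx; mailx's -t flag reads the recipients from the headers crond already writes, and this assumes mailx's SMTP settings are configured in /etc/mail.rc. Packages such as msmtp-mta or esmtp also provide sendmail-compatible binaries from the repositories.

        #!/bin/sh
        # /usr/sbin/sendmail shim: crond pipes the message (headers included)
        # on stdin; -t takes recipients from those headers, and any
        # sendmail-specific flags crond passes are deliberately ignored
        exec mailx -t
        # remember: chmod +x /usr/sbin/sendmail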

    Read the article

  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based Javascript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that Javascript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual javascript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of Javascript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my Javascript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, require them to be asynchronous and therefore use callbacks - but who is determining what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!
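    A concrete illustration of the distinction (a Node.js sketch using the core fs module): nothing in the language itself marks a function as asynchronous; what matters is whether the API hands the work to the host (libuv and the OS) and takes a callback, or computes on the single JS thread before returning:

        var fs = require('fs');

        // Blocking: runs to completion on the JS thread; no callbacks,
        // timers, or I/O completions execute until it returns.
        var data = fs.readFileSync('config.json', 'utf8');
        console.log('sync read done');

        // Non-blocking: the read is handed off to libuv; the callback is
        // queued back onto the event loop when the I/O completes.
        fs.readFile('config.json', 'utf8', function (err, contents) {
          if (err) throw err;
          console.log('async read done');
        });
        console.log('this line runs before the async callback');

    This run-to-completion behavior is also what keeps ordinary loops and assignments from racing with each other: between callbacks, your code runs without interruption.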

    Read the article

  • UDF Partition reported full when it is not

    - by Capt.Nemo
    I was using these instructions to set up an external hard disk with UDF. I have been able to set up a multi-partition system using those instructions, but I seem to have hit a wall where the partition is reported as full while writing to the disk, even though every other tool available to me reports it as free. (Relevant lshw output and a screenshot showing the disk accompany the original post.) Both the output of df and the file manager (caja) report the disk as free:

        Filesystem Size Used Avail Use% Mounted on
        /dev/sda9 9.0G 7.6G 910M 90% /
        udev 974M 12K 974M 1% /dev
        /dev/sda1 50G 47G 295M 100% /media/Data
        /dev/sda6 49G 41G 5.9G 88% /home
        /dev/sda2 155G 127G 29G 82% /media/Entertainment
        /dev/sda8 14G 13G 516M 96% /media/Stuff
        /dev/sdb2 120G 1.9G 112G 2% /media/3c887659-5676-4946-875b-b797be508ce7
        /dev/sdb3 11G 2.6G 7.7G 25% /media/108b0a1d-fd1a-4f38-b1c6-4ad1a20e34a3
        /dev/sdb1 802G 34G 768G 5% /media/disk

    I seem to have hit a wall near the 35 GB mark. Despite being shown as 35 GB/860 GB used everywhere, the following happens on a write attempt:

        [2017][/media/Dory]$ echo D>>echo
        bash: echo: write error: No space left on device

    Writing byte by byte, the maximum I can take it to is 34719248K. The weirdest part is that on mounting the disk in Windows, Windows can write to it easily, and the writes read back fine in Ubuntu. However, the used-bytes figure stays at 34719248K in Ubuntu (it goes higher on Windows, however).

    Read the article

  • How to make Emacs not pop up a window when using tab completion?

    - by Jinx
    When I use Emacs shell mode, or gdb, and type a double tab, Emacs pops up a new window which always covers an existing window, whereas in a terminal a double tab to complete a directory just prints all the candidates in the same window. Can I make Emacs not pop up a new window when I use this feature?

    Edit: this is what I want to do, but it's wrong; can somebody fix it?

        ;remove annoying poped-up windows
        (defun rm-popup-window ()
          (other-window)
          (kill-this-buffer)
          (other-window)
        )
        (global-set-key [C-'] 'rm-popup-window);
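    A possible repair of that snippet (a sketch, untested here): other-window needs a count argument, a command bound to a key must be declared interactive, and (kbd "C-'") is the usual way to spell the key:

        ;; kill the buffer shown in the next window, then close that window
        (defun rm-popup-window ()
          (interactive)
          (other-window 1)
          (kill-buffer (current-buffer))
          (delete-window))
        (global-set-key (kbd "C-'") 'rm-popup-window)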

    Read the article

  • Diskless with Ubuntu 12.04

    - by user139462
    I'm trying to set up a new diskless solution with Ubuntu 12.04, without any success. I followed this howto: https://help.ubuntu.com/community/DisklessUbuntuHowto But the initramfs seems unable to mount my NFS share.

    On the server side, my /etc/exports:

        /srv/nfs4 192.168.0.0/24(fsid=0,rw,no_subtree_check)
        /srv/nfs4/nfsroot 192.168.0.0/24(rw,no_root_squash,no_subtree_check,fsid=1,nohide,insecure,sync)

    I'm able to mount my NFS share on a standard Ubuntu installation without any problem; I can mount it on any client with either of these commands:

        mount 192.168.0.3:/nfsroot /mnt
        mount 192.168.0.3:/srv/nfs4/nfsroot /mnt

    My /tftpboot/pxelinux.cfg/default config file is:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/nfsroot ip=dhcp rw

    I also tried:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/srv/nfs4/nfsroot ip=dhcp rw

    What I got in the initramfs:

    With the setting [nfsroot=192.168.0.3:/nfsroot]
    Diskless output: mount call failed - server replied: Permission denied
    In the syslog of my NFS server: rpc.mountd[1266]: refused mount request from 192.168.0.10 for /nfsroot (/): not exported

    With the setting [nfsroot=192.168.0.3:/srv/nfs4/nfsroot]
    Diskless output: mount: the kernel lacks NFS v3 support
    In the syslog of my NFS server:
    Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: authenticated mount request from 192.168.0.10:834 for /srv/nfs4/nfsroot (/srv/nfs4/nfsroot)
    Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: refused unmount request from 192.168.0.10 for /root (/): not exported
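    A note on the two failures (an interpretation, not from the post): the initramfs NFS client speaks v2/v3, so the short NFSv4 pseudo-path /nfsroot is correctly refused, and the "kernel lacks NFS v3 support" message on the full path usually means the initramfs was built without the NFS client module. A direction worth trying on the machine whose kernel and initramfs the clients boot:

        # build the initramfs for NFS boot and include the nfs module
        sudo sed -i 's/^BOOT=local/BOOT=nfs/' /etc/initramfs-tools/initramfs.conf
        echo nfs | sudo tee -a /etc/initramfs-tools/modules
        sudo update-initramfs -u
        # then copy the regenerated initrd.img to the tftp directory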

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC).

        var wins = xs
            .HoppingWindow(TimeSpan.FromSeconds(5),
                           TimeSpan.FromSeconds(2),
                           new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc));

    Logically, there is a window with start time a + np and end time a + np + d for every integer n. That's a lot of windows. So why doesn't the following query (always) blow up?

        var query = wins.Select(win => win.Count());

    A few users have asked why StreamInsight doesn't produce output for empty windows. Primarily it's because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate "the end of time" and DateTimeOffset.MinValue to approximate "the beginning of time", so the number of windows is lower in practice.)

    That was the good news. Now the bad news. Events also have duration. Consider the following simple input:

        var xs = this.Application
                     .DefineEnumerable(() => new[]
                         { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })
                     .ToStreamable(AdvanceTimeSettings.IncreasingStartTime);

    Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn't suffer from this shortcoming.

    The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows ("Windows") are assigned to six intervals ("Assignments") in the timeline. Next we take input events ("Events") and alter their lifetimes ("Altered Events") so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2, so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows ("Snapshots") to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher.

    Figure 1: Timeline (diagram in the original post)

    The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function, which rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n, will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks:

        static long SnapToBoundary(long t, long a, long p)
        {
            return t - ((t - a) % p) - (t > a ? 0L : p);
        }

    How do we find the earliest window including the start time for an event? It's the window following the last window that does not include the start time, assuming that there are no gaps in the windows (i.e. duration < interval), a limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends:

        long e = a + (d % p);

    Using the window end alignment, we are finally ready to describe the start time selector:

        static long AdjustStartTime(long t, long e, long p)
        {
            return SnapToBoundary(t, e, p) + p;
        }

    To find the latest window including the end time for an event, we look for the last window start time (non-inclusive):

        public static long AdjustEndTime(long t, long a, long d, long p)
        {
            return SnapToBoundary(t - 1, a, p) + p + d;
        }

    Bringing it together, we can define the translation from events to 'altered events' as in Figure 1:

        public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment)
        {
            if (source == null) throw new ArgumentNullException("source");

            // reason about DateTime and TimeSpan in ticks
            long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);
            long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));

            // set alignment to earliest possible window
            var a = alignment.ToUniversalTime().Ticks % p;

            // verify constraints of this solution
            if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }
            if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }

            // find the alignment of window ends
            long e = a + (d % p);

            return source.AlterEventLifetime(
                evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),
                evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -
                    ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)));
        }

        public static DateTime ToDateTime(long ticks)
        {
            // just snap to min or max value rather than under/overflowing
            return ticks < DateTime.MinValue.Ticks
                ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)
                : ticks > DateTime.MaxValue.Ticks
                ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)
                : new DateTime(ticks, DateTimeKind.Utc);
        }

    Finally, we can describe our custom hopping window operator:

        public static IQWindowedStreamable<T> HoppingWindow2<T>(
            IQStreamable<T> source,
            TimeSpan duration,
            TimeSpan interval,
            DateTime alignment)
        {
            if (source == null) { throw new ArgumentNullException("source"); }
            return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow();
        }

    By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing!

        public void Main()
        {
            var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);
            var duration = TimeSpan.FromSeconds(5);
            var interval = TimeSpan.FromSeconds(2);
            var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);
            var events = this.Application.DefineEnumerable(() => new[]
            {
                EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),
                EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),
                EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),
                EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),
                EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),
                EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),
                EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),
            }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);
            var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);
            var query = from win in HoppingWindow2(events, duration, interval, alignment)
                        select win.Count();
            DisplayResults(adjustedEvents, "Adjusted Events");
            DisplayResults(query, "Query");
        }

    As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time:

        Adjusted Events
        StartTime              EndTime                 Payload
        6/28/2012 12:00:01 AM  12/31/9999 11:59:59 PM  e0
        6/28/2012 12:00:03 AM  6/28/2012 12:00:07 AM   e1
        6/28/2012 12:00:05 AM  6/28/2012 12:00:15 AM   e2
        6/28/2012 12:00:11 AM  6/28/2012 12:00:15 AM   e3

        Query
        StartTime              EndTime                 Payload
        6/28/2012 12:00:01 AM  6/28/2012 12:00:03 AM   1
        6/28/2012 12:00:03 AM  6/28/2012 12:00:05 AM   2
        6/28/2012 12:00:05 AM  6/28/2012 12:00:07 AM   3
        6/28/2012 12:00:07 AM  6/28/2012 12:00:11 AM   2
        6/28/2012 12:00:11 AM  6/28/2012 12:00:15 AM   3
        6/28/2012 12:00:15 AM  12/31/9999 11:59:59 PM  1

    Regards,
    The StreamInsight Team

    Read the article

  • Serialize plain clean XML in .NET

    - by Jon Canning
    public static string ToXml<T>(this T obj) where T : class
    {
        using (var stringWriter = new StringWriter())
        {
            var xmlWriterSettings = new XmlWriterSettings { OmitXmlDeclaration = true };
            using (var xmlWriter = XmlWriter.Create(stringWriter, xmlWriterSettings))
            {
                var xmlSerializerNamespaces = new XmlSerializerNamespaces(new[] { XmlQualifiedName.Empty });
                var xmlSerializer = new XmlSerializer(typeof(T));
                xmlSerializer.Serialize(xmlWriter, obj, xmlSerializerNamespaces);
            }
            return stringWriter.ToString();
        }
    }
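    A quick usage sketch (the Widget type is hypothetical; the extension method needs System.IO, System.Xml, and System.Xml.Serialization in scope):

        public class Widget
        {
            public string Name { get; set; }
            public int Count { get; set; }
        }

        // produces: <Widget><Name>gizmo</Name><Count>3</Count></Widget>
        // (no XML declaration, no xmlns attributes)
        var xml = new Widget { Name = "gizmo", Count = 3 }.ToXml();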

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when a large number of CPAs, in excess of several hundred, must be maintained within the B2B repository.

    Symptoms: The import of a CPA is usually a 2-step process: first creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file, and then using b2bimport to import it into the B2B repository. The commands are provided below:

        ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners in the repository goes up, the second command can take up to ~30 secs per operation. This could add up to a significant amount of time if there is a need to import hundreds of CPAs into a production system within a limited-downtime maintenance window.

    Remedy: In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import of each individual CPA in an empty repository. Since this is done in an empty repository, the time taken for completion should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is imported into a loaded repository, the total time taken for the import of all the CPAs should see a dramatic reduction.

    Results: Let us look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, each individual import takes ~30 secs, so importing another 100 partners one by one would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners in a staging environment first, the import takes about ~5 mins. The total processing time for loading the metadata, especially in a production environment, can thus be shortened by almost a factor of 10.

    Summary: The diagram in the original post summarizes the entire approach and process.

    Acknowledgements: The material posted here has been compiled with help from the B2B Engineering and Product Management teams.

    Read the article

  • Creating an update method in a different class

    - by Sweta Dwivedi
    I have created a class called 3DModel which animates my 3D model by changing the model position according to point values read from a .txt file into a list. Since I'm using a loop to read the point values, when it reaches the end of the file XNA throws an out-of-bounds exception (which is obvious), but if I add the same code to my Game.cs Update(gameTime) method, I don't have this problem. Any idea how to make my 3DModel update work the same as the Update in Game.cs? Here is the code for some idea:

        public void patterns(GameTime gameTime)
        {
            motion_z = new List<Point3D>();
            if (pattern == 1)
            {
                f = "E:/Motion_Track-output/Output1.txt";
            }
            if (pattern == 2)
            {
                f = "E:/Motion_Track-output/cruse.txt";
            }
            // TODO: Add your update logic here
            using (StreamReader r = new StreamReader(f))
            {
                string line;
                //Viewport view = graphics.GraphicsDevice.Viewport;
                int maxWidth = view.Width;
                int maxHeight = view.Height;
                while ((line = r.ReadLine()) != null)
                {
                    string[] temp = line.Split(',');
                    int x = (int)Math.Floor(((float.Parse(temp[0]) * 0.5f) + 0.5f) * maxWidth);
                    int y = (int)Math.Floor(((float.Parse(temp[1]) * -0.5f) + 0.5f) * maxHeight);
                    int z = (int)Math.Floor(((float.Parse(temp[2]) / 4 * 20000)));
                    motion_z.Add(new Point3D(x, y, z));
                }
                modelPosition.X = (float)(motion_z[i].X);
                modelPosition.Y = (float)(motion_z[i].Y);
                modelPosition.Z = (float)(motion_z[i].Z);
                i++;
            }
            //Console.WriteLine("modelposX:" + modelPosition.X + "," + "motionzX:" + motion_z[i].X);
        }
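    A sketch of one way out (an assumption about the intent: advance one point per frame and stop, or loop, at the end of the list): load the file once, outside the per-frame path, then guard the index in the update instead of incrementing it unconditionally:

        // per-frame update: never index past the end of the list
        if (motion_z.Count > 0 && i < motion_z.Count)
        {
            modelPosition = new Vector3(
                (float)motion_z[i].X,
                (float)motion_z[i].Y,
                (float)motion_z[i].Z);
            i++;   // or: i = (i + 1) % motion_z.Count; to loop the animation
        }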

    Read the article
