Search Results

Search found 22569 results on 903 pages for 'win32 process'.

  • information about /proc/pid/sched

    - by redeye
    Not sure this is the right place for this question, but here goes: I'm trying to make some sense of the /proc/pid/sched and /proc/pid/task/tid/sched files for a highly threaded server process; however, I was not able to find a good explanation of how to interpret these files (just a few bits here: http://knol.google.com/k/linux-performance-tuning-and-measurement#). I assume this entry in procfs is related to newer kernels that run the CFS scheduler? The distro is CentOS, running kernel 2.6.24.7-149.el5rt with the PREEMPT_RT patch. Any thoughts?
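
    For poking at it quickly, the layout is regular enough to dump programmatically. Below is a minimal C# sketch (runnable under Mono on Linux) that just pretty-prints the "name : value" pairs; it assumes the colon-separated layout that CFS-era kernels emit for this file, and the exact fields vary by kernel version, so treat it as an inspection aid rather than a definitive parser:

        using System;
        using System.IO;

        class SchedDump
        {
            static void Main(string[] args)
            {
                // Read /proc/<pid>/sched; "self" works for the calling process.
                string pid = args.Length > 0 ? args[0] : "self";
                foreach (string line in File.ReadLines("/proc/" + pid + "/sched"))
                {
                    // Most lines look like "se.sum_exec_runtime  :  123.456".
                    string[] parts = line.Split(new[] { ':' }, 2);
                    if (parts.Length == 2)
                        Console.WriteLine("{0,-40} {1}", parts[0].Trim(), parts[1].Trim());
                }
            }
        }

    The same loop pointed at each /proc/pid/task/tid/sched file gives the per-thread view.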

    Read the article

  • Force failover a Cisco ASA

    - by user974896
    I have two ASAs in a LAN failover primary/secondary configuration. Neither of them has "failover active" or "no failover active" in its configuration. Would it be proper to fail over in a manner such as: log into the console of the primary unit and issue "failover lan state secondary", then log into the console of the original secondary unit and issue "failover lan state primary"; to fail back, simply reverse the process. Or: log into the console of the primary unit and issue "no failover active", then log into the console of the original secondary unit and issue "failover active"; to fail back, issue "failover active" on the original primary (now secondary) unit, and "no failover active" on the now-primary unit. I do not like the second method because it adds configuration directives that were not in place before. Will the first method work?

    Read the article

  • Dealing with 2D pixel shaders and SpriteBatches in XNA 4.0 component-object game engine?

    - by DaveStance
    I've got a bit of experience with shaders in general, having implemented a couple of very simple 3D fragment and vertex shaders in OpenGL/WebGL in the past. Currently, I'm working on a 2D game engine in XNA 4.0, and I'm struggling with the process of integrating per-object and full-scene shaders into my current architecture. I'm using a component-entity design, wherein my "Entities" are merely collections of components that are acted upon by discrete system managers (SpatialProvider, SceneProvider, etc). In the context of this question, my draw call looks something like this: SceneProvider::Draw(GameTime) calls... ComponentManager::Draw(GameTime, SpriteBatch), which calls (on each drawable component) DrawnComponent::Draw(GameTime, SpriteBatch). The SpriteBatch is set up, with the default SpriteBatch shader, in the SceneProvider class just before it tells the ComponentManager to start rendering the scene. From my understanding, if a component needs to use a special shader to draw itself, it must do the following when its Draw(GameTime, SpriteBatch) method is invoked: public void Draw(GameTime gameTime, SpriteBatch spriteBatch) { spriteBatch.End(); spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, EffectShader, ViewMatrix); // Draw things here that are shaded by the "EffectShader." spriteBatch.End(); spriteBatch.Begin(/* same settings that were set by SceneProvider to ensure the rest of the scene is rendered normally */); } My question is: having been told that numerous calls to SpriteBatch.Begin() and SpriteBatch.End() within a single frame are terrible for performance, is there a better way to do this? Is there a way to instruct the currently running SpriteBatch to simply change the Effect shader it is using for this particular draw call and then switch it back before the function ends?
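
    One alternative pattern, sketched below: rather than interrupting the scene batch inside each component, group the drawable components by their effect once per frame and issue a single Begin/End pair per group. This is a sketch of the restructuring, not the engine's actual API; DrawnComponent and its EffectShader property are hypothetical stand-ins for the types described above, while the SpriteBatch.Begin overload taking an Effect is real XNA 4.0:

        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public abstract class DrawnComponent
        {
            // null means "draw with the default SpriteBatch shader".
            public Effect EffectShader { get; protected set; }
            public abstract void Draw(GameTime gameTime, SpriteBatch spriteBatch);
        }

        public class ComponentManager
        {
            private readonly List<DrawnComponent> drawables = new List<DrawnComponent>();

            public void Draw(GameTime gameTime, SpriteBatch spriteBatch, Matrix viewMatrix)
            {
                // One Begin/End per distinct shader instead of one per component.
                foreach (var group in drawables.GroupBy(d => d.EffectShader))
                {
                    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                                      null, null, null, group.Key, viewMatrix);
                    foreach (var component in group)
                        component.Draw(gameTime, spriteBatch);
                    spriteBatch.End();
                }
            }
        }

    The trade-off is that draw order becomes grouped by shader rather than by component order, so this only fits when components sharing an effect may legitimately be drawn together.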

    Read the article

  • Can I make ssh tell me which control file it would use for multiplexing?

    - by Ryan Thompson
    I am using the following options in my ~/.ssh/config in order to enable connection multiplexing: ControlMaster auto ControlPath ~/.ssh/control/master-%r@%h:%p However, this has the annoying problem that the first shell to connect to a particular server must be the last to disconnect, because it is the master connection that all the other connections are using. So if you log out of the master, it appears to just hang. To solve this, I would like to wrap ssh with a script that checks if the control master file exists, and if not, starts a master ssh process in the background. Then it would start a slave ssh session. In order to accomplish this, my script would have to determine the path to the control file that ssh would use. This would entail parsing the ssh command line options and config files and implementing the logic for determining the ControlPath. Is there any way to just ask ssh what path it would use, so I can check it?
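
    OpenSSH of that era has no option that prints the expanded path, but the token substitution is simple enough to reproduce inside the wrapper. Below is a minimal C# sketch of the %r/%h/%p expansion for the ControlPath template above; the defaults (remote user = local user, port 22) and the leading-~ handling are assumptions, and a faithful wrapper would also have to honor Host blocks in ~/.ssh/config:

        using System;

        class ControlPathGuess
        {
            // Expand the ControlPath tokens the way ssh would for one destination.
            static string Expand(string template, string user, string host, int port)
            {
                string home = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile);
                return template.Replace("%r", user)
                               .Replace("%h", host)
                               .Replace("%p", port.ToString())
                               .Replace("~", home);
            }

            static void Main(string[] args)
            {
                string host = args.Length > 0 ? args[0] : "example.com";
                // Assumptions: same remote user as the local one, default port 22.
                Console.WriteLine(Expand("~/.ssh/control/master-%r@%h:%p",
                                         Environment.UserName, host, 22));
            }
        }

    Alternatively, ssh -O check <host> asks the master directly whether it is running for a destination, which may be all the wrapper actually needs.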

    Read the article

  • How to use launchd to ensure an application is running?

    - by zneak
    Hello guys, I use Quicksilver, which is a nifty program that really speeds up my work. I've set it to launch when I log in. However, since Snow Leopard, it's become a bit less stable, and once in a while, it crashes. And obviously, it won't restart by itself. I'm pretty sure I can use launchd to ensure it's always running when I'm logged in. Is there a good guide/example of how to make sure a process restarts when it's killed, terminated, or crashes?
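
    launchd relaunches any job it started whenever the job exits if the plist sets KeepAlive, which is exactly this use case. Below is a minimal sketch of a per-user LaunchAgent; the label is made up and the path assumes a standard /Applications install. Save it as ~/Library/LaunchAgents/local.quicksilver.plist and remove the normal login item, so that launchd rather than loginwindow owns the process:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>local.quicksilver</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Applications/Quicksilver.app/Contents/MacOS/Quicksilver</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
            <!-- Relaunch whenever the process exits, crash or not. -->
            <key>KeepAlive</key>
            <true/>
        </dict>
        </plist>

    Load it once with launchctl load ~/Library/LaunchAgents/local.quicksilver.plist (or just log out and back in).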

    Read the article

  • OpenVPN issue with Linux

    - by catsy
    So I've tried to set up OpenVPN following a guide, but it gets stuck at "Initialization Sequence Completed" with no connection, and I can't find any working solution... Here's the log:
    Sun Sep 23 19:14:32 2012 OpenVPN 2.1.0 i486-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Jul 20 2010
    Enter Auth Username:pumpedup
    Enter Auth Password:
    Sun Sep 23 19:14:37 2012 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
    Sun Sep 23 19:14:37 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
    Sun Sep 23 19:14:37 2012 LZO compression initialized
    Sun Sep 23 19:14:37 2012 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
    Sun Sep 23 19:14:38 2012 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
    Sun Sep 23 19:14:38 2012 Local Options hash (VER=V4): '41690919'
    Sun Sep 23 19:14:38 2012 Expected Remote Options hash (VER=V4): '530fdded'
    Sun Sep 23 19:14:38 2012 Socket Buffers: R=[163840-131072] S=[163840-131072]
    Sun Sep 23 19:14:38 2012 UDPv4 link local: [undef]
    Sun Sep 23 19:14:38 2012 UDPv4 link remote: [AF_INET]192.162.102.162:1194
    Sun Sep 23 19:14:38 2012 TLS: Initial packet from [AF_INET]192.162.102.162:1194, sid=87a95723 a6d7b7f9
    Sun Sep 23 19:14:38 2012 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
    Sun Sep 23 19:14:38 2012 VERIFY OK: depth=1, /C=NV/ST=NV/L=nVPN/O=nVpn/CN=nVpn_CA/[email protected]
    Sun Sep 23 19:14:38 2012 VERIFY OK: depth=0, /C=NV/ST=NV/L=nVPN/O=nVpn/CN=server/[email protected]
    Sun Sep 23 19:14:39 2012 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1542', remote='link-mtu 6042'
    Sun Sep 23 19:14:39 2012 WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1500', remote='tun-mtu 6000'
    Sun Sep 23 19:14:39 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
    Sun Sep 23 19:14:39 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
    Sun Sep 23 19:14:39 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
    Sun Sep 23 19:14:39 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
    Sun Sep 23 19:14:39 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
    Sun Sep 23 19:14:39 2012 [server] Peer Connection Initiated with [AF_INET]192.162.102.162:1194
    Sun Sep 23 19:14:41 2012 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
    Sun Sep 23 19:14:41 2012 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,dhcp-option DNS 8.8.8.8,dhcp-option DNS 8.8.8.8,route 10.102.162.1,topology net30,ping 10,ping-restart 120,ifconfig 10.102.162.6 10.102.162.5'
    Sun Sep 23 19:14:41 2012 OPTIONS IMPORT: timers and/or timeouts modified
    Sun Sep 23 19:14:41 2012 OPTIONS IMPORT: --ifconfig/up options modified
    Sun Sep 23 19:14:41 2012 OPTIONS IMPORT: route options modified
    Sun Sep 23 19:14:41 2012 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
    Sun Sep 23 19:14:41 2012 ROUTE default_gateway=10.0.2.2
    Sun Sep 23 19:14:41 2012 TUN/TAP device tun0 opened
    Sun Sep 23 19:14:41 2012 TUN/TAP TX queue length set to 100
    Sun Sep 23 19:14:41 2012 /sbin/ifconfig tun0 10.102.162.6 pointopoint 10.102.162.5 mtu 1500
    Sun Sep 23 19:14:41 2012 /sbin/route add -net 192.162.102.162 netmask 255.255.255.255 gw 10.0.2.2
    Sun Sep 23 19:14:41 2012 /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.102.162.5
    Sun Sep 23 19:14:41 2012 /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.102.162.5
    Sun Sep 23 19:14:41 2012 /sbin/route add -net 10.102.162.1 netmask 255.255.255.255 gw 10.102.162.5
    Sun Sep 23 19:14:41 2012 Initialization Sequence Completed
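
    The two "used inconsistently" warnings in the middle of the log look like the real problem: the server advertises link-mtu 6042 / tun-mtu 6000 while this client runs the default 1500, a mismatch that can leave the tunnel "up" but unable to pass traffic. A hedged guess at a client-side fix, assuming the server really is configured with tun-mtu 6000 (these are standard OpenVPN directives, added to the client config):

        # match the server's advertised tunnel MTU (from the warnings above)
        tun-mtu 6000
        # fragment/clamp so 6000-byte tunnel packets survive a ~1500-byte path;
        # note that 'fragment' only works if the server side sets it as well
        fragment 1300
        mssfix 1300

    If that doesn't help, the redirect-gateway push combined with the 10.0.2.2 default gateway in the log suggests this client is running inside a NAT'd virtual machine, which is worth ruling out separately.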

    Read the article

  • Order in which passphrase is asked for encrypted volumes

    - by Lars Kotthoff
    I have installed 12.10 on a machine with two disks. The root partition is on one disk, the swap partition on the other. Both disks are encrypted and I have added the corresponding entries to /etc/crypttab. During boot, it asks for the passphrase for the disk with the root filesystem. Then it continues booting and gets to the login screen before I get a chance to enter the passphrase for the other disk. After logging in, I verified that it was actually waiting for me to enter the passphrase for that second partition (askpass process is running). But at that point, I have no way of entering the passphrase anymore. The manpage for crypttab suggests that the order in which the volumes are specified matters, so I changed it to have the swap disk first. I updated the initramfs and grub afterwards, but it didn't make any difference. How can I specify the order in which the encrypted partitions are unlocked? I'm looking for a solution that either asks for the swap passphrase first or tells the system to wait until all encrypted partitions are unlocked before displaying the login screen.
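
    A different way out, rather than fighting the unlock order: derive the swap volume's key from the already-unlocked root volume, so only one passphrase is ever asked and nothing is left waiting behind the login screen. This is a sketch assuming Ubuntu's stock cryptsetup keyscripts and placeholder device/mapping names (cryptroot on /dev/sda2, cryptswap on /dev/sdb1); the derived key has to be added to the swap volume's LUKS header once first:

        # one-time step: add the derived key to the swap volume (root must be open)
        sudo /lib/cryptsetup/scripts/decrypt_derived cryptroot > /tmp/swapkey
        sudo cryptsetup luksAddKey /dev/sdb1 /tmp/swapkey
        sudo shred -u /tmp/swapkey

        # /etc/crypttab: swap now unlocks itself from cryptroot, no second prompt
        cryptroot  /dev/sda2  none       luks
        cryptswap  /dev/sdb1  cryptroot  luks,keyscript=/lib/cryptsetup/scripts/decrypt_derived

    Run sudo update-initramfs -u afterwards so the change is picked up at boot.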

    Read the article

  • What's the fastest automatic way to transfer 2GB of data between 2 PCs every night?

    - by phan
    While it's fast (less than 2 minutes), I hate having to copy files from PC #1 onto a USB stick and then manually popping it into PC #2 to copy the files over. Dropbox is too slow in uploading and then downloading 2GB (syncing); it could take hours. Copying 2GB over the network is also slow because we're dealing with 10,000 little files that total 2GB, not one giant 2GB file. Dealing with 10,000 little files makes the copy process much longer because the per-file overhead (open/close calls, metadata updates, protocol round trips) dominates the transfer time. Is there any other method that I'm missing? Any ideas? I'm using Win7 on both PCs. Edit: These files change every single night.
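
    Since both machines run Win7, one low-effort option is robocopy, which ships with the OS: mirror the folder over the network each night and let /MT parallelize the per-file overhead, which is what usually helps most with thousands of small files. Paths, share name and log location below are placeholders to adapt; schedule the line with Task Scheduler:

        robocopy C:\nightly \\PC2\nightly /MIR /MT:32 /R:2 /W:5 /LOG:C:\logs\nightly-sync.log

    Even when all 10,000 files change, /MT:32 keeps many small transfers in flight at once instead of paying the per-file round-trip cost serially.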

    Read the article

  • Installed 4GB memory but Windows XP 32 bit only reporting 2GB?

    - by AnthonyWJones
    I've just taken an existing XP Pro 32-bit system that had only 0.5GB of memory installed and maxed it out to 4GB. The BIOS reports the 4GB of RAM; however, when XP is booted and I look at the computer properties, only 2GB of RAM is reported. Can anyone explain this? Before we go up any blind alleys: the /3GB switch is not the answer here, I have no need for a single process to use more than 2GB of memory. I'm wondering if 32-bit XP Pro is deliberately limited to 2GB. I seem to remember seeing an excellent table on a Microsoft site listing all the various SKUs of Windows and what each one was limited to, but I can't seem to find that table now.

    Read the article

  • Switch or a Dictionary when assigning to new object

    - by KChaloux
    Recently, I've come to prefer mapping 1-1 relationships using Dictionaries instead of switch statements. I find it a little faster to write and easier to mentally process. Unfortunately, when mapping to a new instance of an object, I don't want to define it like this: var fooDict = new Dictionary<int, IBigObject>() { { 0, new Foo() }, // Creates an instance of Foo { 1, new Bar() }, // Creates an instance of Bar { 2, new Baz() } // Creates an instance of Baz }; var quux = fooDict[0]; // quux references Foo Given that construct, I've wasted CPU cycles and memory creating 3 objects, doing whatever their constructors might contain, and only ended up using one of them. I also believe that mapping other objects to fooDict[0] in this case will cause them to reference the same thing, rather than creating a new instance of Foo as intended. A solution would be to use a lambda instead: var fooDict = new Dictionary<int, Func<IBigObject>>() { { 0, () => new Foo() }, // Returns a new instance of Foo when invoked { 1, () => new Bar() }, // Ditto Bar { 2, () => new Baz() } // Ditto Baz }; var quux = fooDict[0](); // equivalent to saying 'var quux = new Foo();' Is this getting to a point where it's too confusing? It's easy to miss that () on the end. Or is mapping to a function/expression a fairly common practice? The alternative would be to use a switch: IBigObject quux; switch(someInt) { case 0: quux = new Foo(); break; case 1: quux = new Bar(); break; case 2: quux = new Baz(); break; } Which approach is more acceptable? Dictionary: faster lookups and fewer keywords (case and break). Switch: more commonly found in code, and doesn't require a Func<IBigObject> for indirection.

    Read the article

  • Bad archive mirror using PXE boot method

    - by user11566
    I'm trying to automatically install Ubuntu on a client PC by using the PXE boot method. My objectives are below: I am following the steps given in this link installation using PXE BOOT. The server will have a kickstart config file which contains the parameters for the OS installation and the files which are required for the OS installation. The client will have to detect this configuration along with the setup files and complete the installation without any input from the user. On my server I have installed DHCP3-server, Apache2 and TFTP to help me with the installation. I have nearly achieved my first objective: I am able to boot my client using the files stored in the server, but during the installation stage it asks me to CHOOSE A MIRROR OF UBUNTU ARCHIVE. I gave the server's IP address and the path on the server where the files are located, but then it gives me this error: BAD ARCHIVE MIRROR. So, instead of downloading all the files from the internet and storing them on my disk, is it possible to use the files which come with the Ubuntu CD, and in what format should I store them on the disk (should I zip them)? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS. How should the configuration file be given to the installation process?
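
    A sketch of one way to wire this up, assuming an alternate/server ISO (those carry a real package pool; the desktop live CD does not work as a mirror) and placeholder paths/IP: loop-mount the ISO, unzipped and unchanged, under the Apache docroot, then pass ks.cfg to the installer as a kernel argument in the PXE config:

        # publish the CD contents over HTTP, layout unchanged (do not zip them)
        sudo mkdir -p /var/www/ubuntu
        sudo mount -o loop ubuntu-alternate-i386.iso /var/www/ubuntu

        # pxelinux.cfg/default: hand ks.cfg to the installer at boot
        label autoinstall
            kernel ubuntu-installer/i386/linux
            append initrd=ubuntu-installer/i386/initrd.gz ks=http://192.168.1.1/ks.cfg

    When the installer then asks for the mirror, the hostname would be 192.168.1.1 and the directory /ubuntu.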

    Read the article

  • Should I reformat XP with: Quick, regular, or "the current file-system"?

    - by Julie
    When reformatting, Windows XP asks me to choose from these formatting methods (implying that ALL of them are "formatting methods"... even #3): 1. Reformat using NTFS (quick) 2. Reformat using NTFS 3. Leave the current file-system intact (no changes) What does choice #3 really mean? Does it mean: A. Leave the current file-system (whatever file-system is already in use) and reformat to match that (i.e. if you currently have NTFS, reformat to that again; if you currently have FAT32, reformat to that again; that is, reformat without changing to a different file-system, leaving the current type), or... B. Do absolutely nothing. Don't format. Don't delete any of my files. Abort the formatting process entirely.

    Read the article

  • Logging with Resource Monitor?

    - by Jay White
    I am having sudden spikes in disk read activity, which can tie up my system for a few seconds at a time. I would like to figure out the cause of this before I set my machine to go live. With Performance Monitor I know I can log activity, but this does not show me the individual processes that cause a spike. Resource Monitor allows me to see processes, but I have no way to keep logs. It seems that unless I have Resource Monitor open at the time of a spike, I will not be able to identify the process causing it. Can someone suggest a way to log with Resource Monitor, or an alternative tool that can?
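
    Resource Monitor itself has no logging mode, but the per-process numbers it displays are ordinary performance counters, so a Performance Monitor data collector set can record them unattended. A sketch using the built-in logman tool; the collector name, counter list, interval and output path are placeholders to adapt:

        logman create counter DiskByProcess -c "\Process(*)\IO Read Bytes/sec" "\Process(*)\IO Write Bytes/sec" -si 00:00:01 -f csv -o C:\PerfLogs\DiskByProcess
        logman start DiskByProcess

    After the next spike, stop the collector (logman stop DiskByProcess) and look for the process whose IO counters jump at that timestamp.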

    Read the article

  • Legal issues regarding embedding a toolbar into a browser [closed]

    - by OmarOthman
    We are in the process of developing software that provides a service to internet users, and we would like to ask about the legal liabilities of some issues. Of course, everything is to be done with the consent of the user of our software, but our concern is about third-party tools and services that may be invoked/used by our product. In particular, these are the concerns: (1) Embedding a toolbar into an existing browser. This screenshot is an example, where the words in the highlighted toolbar are passed to www.google.com for searching, and the contents of the window are the results of the search. I want to know if any consent should be obtained before such a toolbar can be embedded in a web browser, whether there are any legal requirements by the web browser, and whether different web browsers have different requirements (at least for Internet Explorer, Firefox, Chrome, Opera and Safari). (2) Invoking a free website from that toolbar (like Google's search page). The screenshot above demonstrates such an existing toolbar. (3) Full ownership of and unrestricted access to the data entered into this toolbar. In the screenshot above, I want to take the words (translation english to spanish) and own them, i.e. store them in my database and do some processing on them. (4) The ability to track the pages visited by the user starting from that free website. In the screenshot above, you can notice that the user opted only for the third result, whose URL is translate.google.com. I want to have access to this and all URLs clicked from this page for some processing as well. This is a commercial application, so I need a very concrete, precise and reference-supported answer.

    Read the article

  • How to build Visual Studio Setup projects (.vdproj) with TFS 2010 Build ?

    - by Vishal
    Out of the box, Team Foundation Server 2010 Build does not support building setup projects (.vdproj). However, you can modify DefaultTemplate.xaml or create your own in order to achieve this. I had to try a bunch of different blog posts and finally got it working with a mixture of all those posts. So that you don't have to go through this pain again, I have uploaded the template, which you can use right away: https://skydrive.live.com/redir?resid=65B2671F6B93CDE9!310 Download and check in this template into your source control. Modify your build definition to use this template. Unless you have checked in the template, it won't show up in the template selection section in the process task of the build definition. In your Visual Studio solution's Configuration Manager, make sure you specify to build the setup project also. You might get this warning in your build result: "The project file "*.vdproj" is not supported by MSBuild and cannot be built." Hope it helps. Thanks, Vishal Mody Reference blog posts I had used: http://geekswithblogs.net/jakob/archive/2010/05/14/building-visual-studio-setup-projects-with-tfs-2010-team-build.aspx http://donovanbrown.com/post/I-need-to-build-a-project-that-is-not-supported-by-MSBuild.aspx http://lajak.wordpress.com/2011/02/19/build-biztalk-deployment-framework-projects-using-tfs2010/

    Read the article

  • Creating a VM using Hyper-V causes the host to BSOD

    - by Arcass
    Hi, Problem description: When I try to create a virtual machine, the host BSODs partway through the process. From the logs it looks to fail/hang on the "Creating new VirtualDiskDriver with new VHD" step. The BSOD error code is SYSTEM_SERVICE_EXCEPTION: STOP: 0x0000003B. When the machine has finished restarting, it looks to have created the VHD and XML files for the VM, but it isn't accessible. I have two servers both behaving in exactly the same way, so I don't believe it's a hardware fault. Has anyone had a similar experience? How did you resolve the problem? NOTES Hardware: HP DL380 G6 BIOS: 2010.03.30 (14 Apr 2010) [Latest from HP website] Intel Hyperthreading: Disabled Intel Virtualization Technology: Enabled No-Execute Memory Protection: Enabled Mem check reports no errors OS: Windows 2008 SP2 x64 fully updated regards Arcass

    Read the article

  • Get songs off of Windows iPod and onto a Mac

    - by Justing
    I have been using iTunes on a Dell running Windows XP to sync about 60 gigs of songs with my old-school iPod. Well, the Dell's hard drive died the other day, and the only place that I have my music is on the iPod (I do have the CDs, but really don't want to have to re-rip 60 gigs of music). So, now I have a shiny new MacBook Pro. Is there a way to get my songs off of my iPod and onto the MacBook? I googled and found Senuti. But I'm leery of accidentally formatting the iPod and losing my songs, and I can't tell if it is Snow Leopard compatible yet. Has anyone recently gone through this process? Please provide suggestions for copying songs from an iPod formatted to work with Windows onto a MacBook Pro running Snow Leopard. Thanks!

    Read the article

  • DPKG errors after upgrade to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10 and now I can't get my system to update all of its packages properly. No matter what I do (cleaning the apt cache, manual install using dpkg, etc.) I just can't get them to install. What is happening here and how do I fix it? If I had thought 12.10 would be this much of a hassle I would have never upgraded... Here is a sampling of the output from "apt-get -f install":
    Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
    /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
    dpkg: warning: subprocess old pre-removal script returned error exit status 2
    dpkg: trying script from the new package instead ...
    /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
    dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 2
    /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
    dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2
    Errors were encountered while processing:
    /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
    /var/cache/apt/archives/pcmciautils_018-8_i386.deb
    /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
    /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
    /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
    /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    It is also just these 6 packages only; no other packages have given me this kind of trouble. Well, I should say as of now: it was just 5, but then I got an update for Unity, and now unity-common is added to the troublemakers, which prevents me from further upgrading the actual unity package as this package is a dependency...
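
    The repeated "dpkg-maintscript-helper: Input/output error" is the suspicious part: that is an I/O failure while the maintainer scripts run, which often points at disk or filesystem trouble rather than a packaging bug. A hedged first step, before fighting apt any further, is to check the kernel log and confirm the helper is even readable:

        # look for disk/filesystem errors logged around the failed installs
        dmesg | grep -iE 'i/o error|ext4|ata'
        # confirm the helper itself can be read at all
        sudo cat /usr/bin/dpkg-maintscript-helper > /dev/null && echo readable

    If either of those turns up errors, an fsck from a live CD is a better next move than more apt-get runs.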

    Read the article

  • Why is IIS 7.5 flushing file cache very often?

    - by Steffen
    We're running a Win 2008 R2 server with IIS 7.5 for serving image files. It's only used for static content, and file caching has been set up to cache files for 10 minutes. However, IIS frequently flushes the entire cache (seen by using Perfmon). It's not application pool recycling, and it's not because the TTL has expired, so now I'm at a loss :-( I've included a screenshot of the perfmon graph where you can clearly see the issue. Is there anywhere I can see WHY it's doing these flushes? (Note: I'm aware I could maybe detect it by attaching a debugger to the process, but that's not an option because it's a production server, and it cannot handle the slowdown a debugger would cause)

    Read the article

  • Mouse doesn't work & internet connection not made in Ubuntu 12.04 LTS

    - by David Skare
    Yesterday, Nov 15, 2012, I booted into my Ubuntu 12.04 LTS system. It has resided on a Crucial 128 GB SSD with about 90% free space since early summer. I also have Windows 7 loaded on another Crucial 256 GB SSD. Ubuntu has set up a dual boot system for me even though each OS has its own SSD. I have been using this setup without problems since summer. Yesterday, when the boot process finished, my Microsoft Comfort Mouse 3000 did not work and there was a message that Ubuntu was not connected to the internet. So w/o the mouse I was forced to turn the machine off manually. About 4 days ago Ubuntu worked fine, and booting into Win 7 also works fine. I have a backup machine with the same style mouse on it, so I swapped the mouse onto this system. Same results. But both mice work when booting into Win 7. Today I removed both SSDs and installed my Ubuntu 12.04 HD, which has not been used since I moved Ubuntu from it to the SSD. Same results. Between the last time I used Ubuntu 12.04 on the SSD and when I tried to use it again, I made no changes to my machine, either hardware or software. My machine's specs are: AMD FX-6100, MSI 990FXA-GD65 AM3+ format with latest BIOS (Ver 19.9), Corsair Vengeance 1866 MHz memory - 16 GB (4GB X 4 sticks), MSI N580GTX video card (nVidia 306.97 drivers), Sony Bravia 32" HD TV as a monitor, Pioneer BluRay DVD-RW, DSL connection to internet thru a router (10 Mbps), Crucial 128 GB SSD (90% free space), Microsoft Comfort Mouse 3000. I try to maintain current BIOS and drivers for all devices. I mostly use my Ubuntu system for programming in GCC and OpenCOBOL, surfing the internet and e-mailing. No games are installed. I'm stumped! If anyone has experienced this same problem I'd appreciate knowing how you solved it. TIA, Dave

    Read the article

  • ODI 11g - Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then think: why don't they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI; it's quite simple, just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improvement for out-of-the-box handling of File to File data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up file integration handling. In the past I had seen some consultants write perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3Gb with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration, plus, and it's a big plus, it was much faster than before. So if you are doing any file to file transformations, check it out!
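
    As a rough illustration of the pattern the KM implements (this is not ODI's actual Java code; it is a C# analog with made-up paths and a stand-in transform): stream each file through the transformation instead of staging it, and let a wildcard match several source files that are processed concurrently, here with 2 workers to mirror the thread count used above:

        using System.IO;
        using System.Threading.Tasks;

        class ParallelFileMove
        {
            static void Main()
            {
                // wildcard resource name -> several source files handled at once
                string[] sources = Directory.GetFiles("/data/in", "orders_*.csv");
                var opts = new ParallelOptions { MaxDegreeOfParallelism = 2 };
                Parallel.ForEach(sources, opts, src =>
                {
                    string dst = Path.Combine("/data/out", Path.GetFileName(src));
                    using (var reader = new StreamReader(src))
                    using (var writer = new StreamWriter(dst))
                    {
                        string line;
                        while ((line = reader.ReadLine()) != null)
                            writer.WriteLine(line.ToUpperInvariant()); // stand-in transform
                    }
                });
            }
        }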

    Read the article

  • Portal And Content – Introduction (1 of 7)

    - by Stefan Krantz
    The coming posts over the next two months will form a new series. The idea is to help the reader understand how to enable a versatile and manageable portal. Each post will go through a specific use case or lifecycle group of events that a Content Driven Portal requires the development team to consider. The current plan is to deliver the following subjects, each topic enclosed in a separate blog post: Introduction – Introduction to the series of posts and what to expect at the end of the series; Components, part 1 – UCM, Site Studio and a high-level introduction to content templates; Components, part 2 – Page Templates and Navigation model; Components, part 3 – Applied Customization Framework for Content Presenter Taskflows; Scenario 1 – Enable a Portal for runtime administration; Scenario 2 – Enable a Portal for Internationalization; Scenario 3 – Enable a Portal for Content Workflows. Background: This post series has been issued to help customers, partners and consultants understand the concept of a WebCenter Portal project where the main focus, or a majority of the portal, is content interaction. Today, most portal installations Oracle WebCenter Portal is involved in consist largely of content-based pages. Many portal projects have run or will run into challenges; to mitigate these, the portal and content lifecycle has to be well designed. The coming posts will address the main components that should be involved when creating such scenarios, and will also go into detail on the process by describing three solution scenarios. The aim of the scenarios is to give the reader a more hands-on understanding of the concept of building and architecting a Content Driven Portal. The scenarios were selected based on the most common use cases that we have identified to date.

    Read the article

  • Rendering a big game universe - bitmaps or vector graphics?

    - by user1641923
    I am new to Android development, though I have much experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3d Studio Max, Flash, etc). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or any 3rd-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of single asteroids/comets which the player can interact with, and a non-interactive background. I am looking for the least complicated approach to create such a universe. My current ideas are: Simply create bitmaps with space scenery background so that they can be seamlessly tiled, construct my 2D universe out of these tiles, then place interactive objects (planets, other spaceships) on it. Using vector graphics. I would have a solid color background, some random background objects and gradients here and there. My problems here: lack of knowledge of how well vector graphics is integrated in Android. Performance? Memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory during the whole game process? I am interested in technical details regarding each of the ideas and a suggestion as to which I should go with.

    Read the article
