Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.

Page 684 of 1118

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard, and have then been mapping these flat file schemas to a "normal" XML schema format. I had not previously had any cause to map flat files, and I ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source, it threw an XSL transform error:

        Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1.

    Due to the complexity of the map in question, I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem. Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time I figured out what the problem was: looking at the map properties, I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "native".

    Read the article

  • txt file descriptor in lsof

    - by wfaulk
    In my experience, files that have the file descriptor of txt in lsof output are the executable file itself and shared objects. The lsof man page says that it means "program text (code and data)". While debugging a problem, I found a large number of data files (specifically, ElasticSearch database index files) that lsof reported as txt. These are definitely not executable files. The process was ElasticSearch itself, which is a Java process, if that helps point someone in the right direction. I want to understand how this process is opening and using these files in a way that gets them reported like this. I'm trying to understand some memory utilization, and I suspect that these open files are related in some way to metrics I'm seeing. The system is Solaris 10 x86.

    Read the article

  • How to handle a key in a PHP array if the key contains Japanese characters [migrated]

    - by Jim Thio
    I have this array:

        [ID] => ????????-???????????__35.79_139.72
        [Email] =>
        [InBuildingAddress] =>
        [Price] =>
        [Street] =>
        [Title] => ???????? ???????????
        [Website] =>
        [Zip] =>
        [Rating Star] => 0
        [Rating Weight] => 0
        [Latitude] => 35.7865334803033
        [Longitude] => 139.716800710514
        [Building] =>
        [City] => Unknown_Japan
        [OpeningHour] =>
        [TimeStamp] => 0000-00-00 00:00:00
        [CountViews] => 0

    Then I do something like this:

        $output[$info['ID']]=$info; //mess up here
        $tes=$info['ID']['Title'];

    Well, guess what: it messes up. Basically, even though the content of a PHP array can be Japanese, it seems the key cannot. Is this true? What's wrong? The error I got is:

        Debug Warning: /sdfdsfdf/api/test2.php line 36 - Cannot find element ????????-???????????__35.79_139.72 in variable
        Debug Warning: /sdfdsfdf/api/test2.php line 36 - main() [function.main]: It is not safe to rely on the system's timezone settings. You are required to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Asia/Krasnoyarsk' for '7.0/no DST' instead

    So many question marks. Why is this happening? What's really going on inside PHP? Where can I learn more about such things? Most importantly, what would be the best way to handle this situation? Should I tell PHP to always use UTF-8 internally? Is a PHP array inherently unable to use non-ASCII keys?

    Read the article

  • free Raw-File Converter/Editor

    - by RCIX
    I have RAW files output by a program with a specific set of properties (Photoshop RAW, 16 bits, IBM PC byte order, no header, 1 non-interleaved channel, variable sizes like 257x257 or 129x513); does anyone know of a free tool that will allow me to convert to and from this format, and possibly do basic editing (selection, copy/paste, rotation of selection)? I've tried Picasa, XnView, and Paint Shop Pro 7 and none of them work properly. The closest I get is Paint Shop Pro, which will at least make a serviceable attempt to open these files, but I can't set all of the proper settings. XnView just might be able to edit them if I can figure out how to change the open settings for a particular RAW file. So my current questions are: how do I tell XnView to open a RAW file a particular way? Failing that, is there any free tool that can open Photoshop RAW files with the above settings (that's not Photoshop)? If it helps, I'm trying to import/export/edit heightmap data for maps for Supreme Commander.

    Read the article

  • How to tell if Linux disk I/O is causing excessive (> 1 second) application stalls

    - by noahz
    I have a Java application performing a large volume (hundreds of MB) of continuous output (streaming plain text) to about a dozen files on an ext3 SAN filesystem. Occasionally, this application pauses for several seconds at a time. I suspect that something related to ext3/vxfs (Veritas File System) functionality (and/or how it interacts with the OS) is the culprit. What steps can I take to confirm or refute this theory? I am aware of iostat and /proc/diskstats as starting points. (Revised title to de-emphasize journaling and emphasize "stalls".) I have done some googling and found at least one article that seems to describe behavior like I am observing: Solving the ext3 latency problem.

    Additional information:

        Red Hat Enterprise Linux Server release 5.3 (Tikanga)
        Kernel: 2.6.18-194.32.1.el5
        Primary application disk is fibre-channel SAN:
            lspci | grep -i fibre
            14:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
        Mount info: type vxfs (rw,tmplog,largefiles,mincache=tmpcache,ioerror=mwdisable) 0 0
        cat /sys/block/VxVM123456/queue/scheduler
            noop anticipatory [deadline] cfq

    Read the article

  • TDD with SQL and data manipulation functions

    - by Xophmeister
    While I'm a professional programmer, I've never been formally trained in software engineering. As I'm frequently visiting here and SO, I've noticed a trend for writing unit tests whenever possible and, as my software gets more complex and sophisticated, I see automated testing as a good idea for aiding debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL was returning the correct data, for example? Then, say the data wasn't under your control (e.g., that of a 3rd party system); how can you efficiently test your processing routines without having to hand write reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I can then join those views with my SQL to see if it's returning the correct records, and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flaky, particularly when it comes to finding data to test against...
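    One way to make the first question concrete is to run the query under test against a tiny, hand-written dataset whose correct answer is known in advance, and then assert on exactly the rows that come back. Below is a minimal sketch of that idea, assuming NUnit and an in-memory SQLite database via Microsoft.Data.Sqlite; the orders table and the query are invented for illustration and are not from the question above.

        using System.Collections.Generic;
        using Microsoft.Data.Sqlite;
        using NUnit.Framework;

        [TestFixture]
        public class OrderQueryTests
        {
            // Hypothetical query under test: total order amount per customer.
            private const string QueryUnderTest =
                "SELECT customer_id, SUM(amount) AS total " +
                "FROM orders GROUP BY customer_id ORDER BY customer_id";

            [Test]
            public void TotalsAreSummedPerCustomer()
            {
                using (var conn = new SqliteConnection("Data Source=:memory:"))
                {
                    conn.Open();

                    // Arrange: a small fixture that covers the cases we care about.
                    var setup = conn.CreateCommand();
                    setup.CommandText =
                        "CREATE TABLE orders (customer_id INTEGER, amount REAL); " +
                        "INSERT INTO orders VALUES (1, 10.0), (1, 2.5), (2, 7.0);";
                    setup.ExecuteNonQuery();

                    // Act: run the SQL being tested.
                    var cmd = conn.CreateCommand();
                    cmd.CommandText = QueryUnderTest;
                    var totals = new List<(long Customer, double Total)>();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            totals.Add((reader.GetInt64(0), reader.GetDouble(1)));
                    }

                    // Assert: the result is exactly the rows we expect.
                    Assert.AreEqual(2, totals.Count);
                    Assert.AreEqual((1L, 12.5), totals[0]);
                    Assert.AreEqual((2L, 7.0), totals[1]);
                }
            }
        }

    The same pattern extends to the "views" idea in the question: point the views at fixture rows you control, and the expected output of every downstream routine becomes something you can write down and assert.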

    Read the article

  • Battery not recognized on my laptop (and it recognizes my laptop as a desktop)

    - by AZorin
    I have installed Ubuntu (both 10.10 and the 11.04 pre-release) on my laptop, but my battery is not recognized and the machine is detected as a desktop system rather than a laptop. I have tried to get the output of cat /proc/acpi/battery/BAT1/state but the directory doesn't exist. I have tried another guide that says to paste the battery info into this directory, but it doesn't allow me to do that and says that the directory doesn't exist, even though I'm trying to make it. I tried it in root Nautilus and even on an install of Lubuntu (with a root file manager) but it still failed to budge. I really don't know what to do, as I have tried all the guides on the internet that I could find. Is there any way to change the configuration file(s) that detect the internal hardware of the computer? The /proc directory is a temporary, in-RAM directory, as far as I know. Is there a directory where that data is stored permanently, from which it is read into RAM, if you know what I mean? Thanks in advance. AZorin. This issue has been reported as bug #764513.

    Read the article

  • ATI Radeon HD with Catalyst driver stuck mirroring screens

    - by Mike Axiak
    In 11.10 I replaced my aging Nvidia card with a new Radeon HD 6970 card. The single card has two DVI output ports which I've connected to two monitors. I installed Catalyst version 11.9 and I cannot get multiple monitors set up the way I want. I tried:

        $ sudo amdcccle

    and setting the mode to single desktop, multiple monitors, and whenever I do that Unity crashes and I get back to the login screen. Nothing shows up in the Xorg.*.log files for me to post here. There's only one card so I don't think xinerama would be any help here. Anyone have any ideas? EDIT: Here's my xorg.conf file:

        Section "ServerLayout"
            Identifier "aticonfig Layout"
            Screen 0 "aticonfig-Screen[0]-0" 0 0
        EndSection

        Section "Module"
        EndSection

        Section "Monitor"
            Identifier "aticonfig-Monitor[0]-0"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
        EndSection

        Section "Monitor"
            Identifier "0-DFP3"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1280x1024"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier "0-CRT1"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1280x1024"
            Option "TargetRefresh" "75"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Device"
            Identifier "aticonfig-Device[0]-0"
            Driver "fglrx"
            Option "Monitor-DFP3" "0-DFP3"
            Option "Monitor-CRT1" "0-CRT1"
            BusID "PCI:5:0:0"
        EndSection

        Section "Device"
            Identifier "amdcccle-Device[5]-1"
            Driver "fglrx"
            Option "Monitor-DFP3" "0-DFP3"
            BusID "PCI:5:0:0"
            Screen 1
        EndSection

        Section "Screen"
            Identifier "aticonfig-Screen[0]-0"
            Device "aticonfig-Device[0]-0"
            DefaultDepth 24
            SubSection "Display"
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "amdcccle-Screen[5]-1"
            Device "amdcccle-Device[5]-1"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

    Read the article

  • Webcam doesn't work in Ubuntu 12.10

    - by Kzhi
    I have a Gembird cam68ut. On my Ubuntu 12.10 it shows a black screen in Cheese and guvcview. I tested it in Windows 7 and it works fine. Here is what I found: it is a UVC-compliant camera, I checked on the site: 18ec:3299 USB 2.0 PC Camera (model number QC3231) ArkMicro. The webcam is reported by lsusb:

        Bus 001 Device 004: ID 18ec:3299 Arkmicro Technologies Inc.

    Here is the output of dmesg | tail:

        uvcvideo: Found UVC 1.00 device USB2.0 PC CAMERA (18ec:3299)
        uvcvideo: UVC non compliance - GET_DEF(PROBE) not supported. Enabling workaround.
        input: USB2.0 PC CAMERA as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.5/1-1.5:1.0/input/input17
        usbcore: registered new interface driver uvcvideo
        USB Video Class driver (1.1.1)

    When I run Cheese (or guvcview), here is what I get in the terminal:

        libv4l2: error turning on stream: No space left on device
        (cheese:11797): cheese-WARNING **: Internal data flow error.

    I tried it on different USB slots with the same results. The webcam's microphone works; I can record audio with it. Guys, any thoughts on what can be done to make it work?

    Read the article

  • PPPoE connection to DSL modem

    - by VJo
    Hello, I am connecting to the internet through a PPPoE connection, but for some reason I cannot connect to my modem (its address is 192.168.1.1). Before I set up my PPPoE connection, I could connect. So, is there a way? EDIT: The output of ifconfig is:

        r@PlaviZec:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:13:d4:f7:02:d4
                  inet6 addr: fe80::213:d4ff:fef7:2d4/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:2811 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2801 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2538831 (2.5 MB)  TX bytes:448591 (448.5 KB)
                  Interrupt:21 Base address:0xa000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:28 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1600 (1.6 KB)  TX bytes:1600 (1.6 KB)

        ppp0      Link encap:Point-to-Point Protocol
                  inet addr:92.229.42.177  P-t-P:213.191.64.59  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
                  RX packets:2794 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2741 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:3
                  RX bytes:2476277 (2.4 MB)  TX bytes:381240 (381.2 KB)

    Read the article

  • Monitoring host and app parameters in real-time

    - by devopsdude42
    I have a bunch of VMs that I need to monitor in real time. For all nodes I need to watch host parameters like load, network usage and free memory; and for some I need app-specific metrics too, like Redis (some vars from the output of the INFO command) and nginx (like requests/sec and average request time). Ideally I'd also like to track some parameters from the custom apps that run on these nodes too. These parameters should get tracked as a bunch of line charts on a dashboard. I checked out Graphite and it looks suitable (although the UX and aesthetics look like they need some love). But setting up and maintaining Graphite looks to be a pain, especially since we don't have a full-time person just for this. Are there any alternatives? Or at least something that is simpler to set up and will scale? Reasonable paid services are also OK.

    Read the article

  • dpkg in uninterruptible sleep

    - by Khaled
    I have several Ubuntu 10.04 servers. Today, I tried to upgrade some packages on one of these servers and the process got stuck. I logged in using another SSH session and I found that dpkg is in the D state (uninterruptible sleep). According to what I have read, this state generally results from waiting on I/O, such as waiting for an NFS share. I cannot understand why dpkg would block in this state, and I cannot see any obvious problems other than this. Here is the output of ps showing the blocked process:

        $ ps axo pid,cmd,s,wchan | grep dpkg
        22571 /usr/bin/dpkg --status-fd 2   D call_rwsem_down_read_failed

    This process cannot be killed even with kill -9, so I will not be able to install or upgrade any package unless I reboot the server. What makes it worse is that a remote reboot does not succeed in such a case (having processes in the D state). Can anyone help with this? How can I avoid this in the future?

    Read the article

  • VMware Kernel Module Updater hangs on Ubuntu 13.04

    VMware Player has a nice auto-detection of kernel changes, and requests the user to compile the required modules in order to load them. This happens from time to time after a regular update of your system. Usually the dialog of the VMware Kernel Module Updater pops up, asks for root access authentication, and completes the compilation. VMware Player or Workstation checks whether modules for the active kernel are available. In theory this is supposed to work flawlessly, but in reality there are pitfalls occasionally. With the recent upgrade to Ubuntu 13.04 Raring Ringtail and the latest kernel 3.8.0-21, the actual VMware Kernel Module Updater simply disappeared and the application wouldn't start as expected. When you launch VMware Player as super user (root), the dialog stalls like so (screenshot: VMware Kernel Module Updater stalls while stopping the services). Prior to version 5.x of VMware Player or version 7.x of VMware Workstation you would run a command like:

        $ sudo vmware-config.pl

    to resolve the module version conflict, but this doesn't work anyway. Solution: instead, you have to execute the following line in a terminal or console window:

        $ sudo vmware-modconfig --console --install-all

    Those switches are (as of writing this article) not documented in the output of the --help switch. But VMware already documented this procedure in their knowledge base: VMware Workstation stops functioning after updating the kernel on a Linux host (1002411). Update: As of today I had the first kernel upgrade, to version 3.8.0-22, in Ubuntu 13.04. Don't even try it without vmware-modconfig...

    Read the article

  • Ubuntu sound volume is 40% lower than in Windows

    - by ncomx
    I have 2x 2.1 speakers connected to the computer where I have Ubuntu 12.04 installed. On the software side I've set all the volume controls to 100% with the alsamixer program. The speakers have their own volume control; keeping that at the same level and switching between Ubuntu and Windows (XP and 7), on Windows the output volume is at least 40% higher. Even with the Windows volume control at 50% (without touching the speakers' volume control) it's still much higher than the sound on Ubuntu. Why could this be happening? Are there some alternative sound drivers (other than the default ones) I could test to see if it makes a difference? Some info about the card:

        root:$ cat /proc/asound/cards
        0 [PCH    ]: HDA-Intel - HDA Intel PCH
                     HDA Intel PCH at 0xfbff4000 irq 55
        1 [Generic]: HDA-Intel - HD-Audio Generic
                     HD-Audio Generic at 0xfbcfc000 irq 56

        root:$ lspci | grep -i audio
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        02:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Cayman/Antilles HDMI Audio [Radeon HD 6900 Series]

    I think the one I am using is the Intel one; the other seems to be from the VGA card, which is an ATI Radeon 6950. Running gstreamer-properties and switching between alsa, oss, ossv4 and pulseaudio doesn't seem to make any difference.

    Read the article

  • Why don't DNS root servers answer?

    - by JustTrying
    If I try to query a root server with dig, I never receive an answer. For example, the output of dig @b.root-servers.net www.ubuntu.com is:

        ; <<>> DiG 9.8.1-P1 <<>> @b.root-servers.net www.ubuntu.com
        ; (1 server found)
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached

    But if I query other servers (my ISP's, or 8.8.8.8), they answer correctly. Why?

    Read the article

  • Sitecore Item Web API and Json.Net Test Drive (Part II – Strongly Typed)

    - by jonel
    In the earlier post I did on this topic, I talked about using Json.Net to consume the result of the Sitecore Item Web API. In that post, I used the dynamic keyword to express my intention of consuming the returned JSON of the API. In this article, I will create some useful classes so that we can consume the API in a strongly-typed way. We will start off with the Record class, which will hold the topmost elements the API presents to us. It is a pretty straightforward class: it has two properties to hold the statuscode and the result elements. If you intend to use a different property name in your class from the JSON property, you can do so by passing a string literal of the JSON property name to the JsonProperty attribute and naming your class property differently. If you look at the earlier post, you will notice that the API returns an array of items that contains all of the Sitecore content items and stores them under the result->items array element. To be able to map that array of items, we have to write a collection property and decorate it with the JsonProperty attribute. The JsonItem class is a simple class which will map to the corresponding item property contained in the array. If you notice, these properties are just the basic Sitecore fields. And here's the main portion of this post that binds them all together. And here's the output of this code. In closing, the same result can be achieved using the dynamic keyword or by defining classes to map the JSON properties returned by the Sitecore Item Web API. With a little bit more coding, you can take advantage of the power of a strongly-typed solution. Have a good week ahead of you.
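    The class definitions themselves do not come through in this excerpt, so here is a rough sketch of what the Record and JsonItem classes described above could look like with Json.Net. The JSON property names and the exact Sitecore fields here are assumptions for illustration; check the actual Item Web API response for the names your service returns.

        using System.Collections.Generic;
        using Newtonsoft.Json;

        // Top-level wrapper: status code plus the result element.
        public class Record
        {
            [JsonProperty("statusCode")]
            public int StatusCode { get; set; }

            [JsonProperty("result")]
            public Result Result { get; set; }
        }

        // Holds the array of returned items under result->items.
        public class Result
        {
            [JsonProperty("totalCount")]
            public int TotalCount { get; set; }

            [JsonProperty("items")]
            public List<JsonItem> Items { get; set; }
        }

        // Basic Sitecore fields for each item in the array.
        public class JsonItem
        {
            [JsonProperty("ID")]
            public string Id { get; set; }

            [JsonProperty("Name")]
            public string Name { get; set; }

            [JsonProperty("Path")]
            public string Path { get; set; }

            [JsonProperty("TemplateName")]
            public string TemplateName { get; set; }
        }

    With classes like these, the binding step becomes a single call such as JsonConvert.DeserializeObject<Record>(json), after which record.Result.Items can be enumerated with full IntelliSense instead of dynamic lookups.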

    Read the article

  • OData – The easiest service I can create

    - by Jon Dalberg
    I wanted to create an OData service with the least amount of code, so I fired up Visual Studio and got cracking. I decided to serve up a list of naughty words and make them read-only. Create a new web project. I created an empty MVC 2 application, but MVC is not required for OData. Add a new WCF Data Service to the project. I named mine NastyWords.svc since I'm serving up a list of nasty words. Add a class to expose via the service, NastyWord:

        [DataServiceKey("Word")]
        public class NastyWord
        {
            public string Word { get; set; }
        }

    I need to be able to uniquely identify instances of NastyWord for the DataService, so I used the DataServiceKey attribute with the "Word" property as the key. I could have added an "ID" property which would have uniquely identified them and would then not need the "DataServiceKey" attribute, because the DataService would apply some reflection and heuristics to guess at which property would be the unique identifier. However, the words themselves are unique, so adding an "ID" property would be redundantly repetitive. Then I created a data source to expose my NastyWord objects to the service. This is just a simple class with IQueryable<T> properties exposing the entities for my service:

        public class NastyWordsDataSource
        {
            private static IList<NastyWord> words = new List<NastyWord>
            {
                new NastyWord{ Word="crap"},
                new NastyWord{ Word="darn"},
                new NastyWord{ Word="hell"},
                new NastyWord{ Word="shucks"}
            };

            public NastyWordsDataSource()
            {
                NastyWords = words.AsQueryable();
            }

            public IQueryable<NastyWord> NastyWords { get; private set; }
        }

    Now I can go to the NastyWords.svc class and tell it which data source to use and which entities to expose:

        public class NastyWords : DataService<NastyWordsDataSource>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Compile, browse to my NastyWords.svc, and weep with joy. Now I can query my service just like any other OData service. Next time, I'll modify this service to allow updates to be sent so I can build up my list of nasty words. Enjoy!

    Read the article

  • Debian 5 is randomly shutting down.

    - by revofreak
    My Debian 5 VPS is suffering from random shutdowns. I reinstalled it several times; the host moved me to a different physical box, checked the install image, and said everyone else also uses it and is fine. Here's the output from syslog:

        Mar 27 00:19:19 noobintraining-1 -- MARK --
        Mar 27 00:32:01 noobintraining-1 shutdown[18142]: shutting down for system halt
        Mar 27 00:32:06 noobintraining-1 init: Switching to runlevel: 0
        Mar 27 00:32:06 noobintraining-1 xinetd[15907]: Exiting...
        Mar 27 00:32:07 noobintraining-1 named[15865]: received control channel command 'stop -p'
        Mar 27 00:32:07 noobintraining-1 named[15865]: shutting down: flushing changes
        Mar 27 00:32:07 noobintraining-1 named[15865]: stopping command channel on 127.0.0.1#953
        Mar 27 00:32:07 noobintraining-1 named[15865]: stopping command channel on ::1#953
        Mar 27 00:32:07 noobintraining-1 named[15865]: no longer listening on ::#53
        Mar 27 00:32:07 noobintraining-1 named[15865]: no longer listening on 127.0.0.1#53
        Mar 27 00:32:07 noobintraining-1 named[15865]: no longer listening on 89.238.172.132#53
        Mar 27 00:32:07 noobintraining-1 named[15865]: exiting
        Mar 27 00:32:07 noobintraining-1 exiting on signal 15

    Any help is most appreciated!

    Read the article

  • Repair ext4 filesystem on USB drive

    - by phineas
    Yet another filesystem question. I wanted to use a USB drive that I hadn't mounted for a month or so, and was surprised to find that Ubuntu was unable to mount it. I looked it up in the disk utility and it said it discovered a device of 17 MB instead of 2 GB. The hardware looks intact; I'm hoping for the best for repairing the ext4 filesystem. I followed the instructions from "HOWTO: Repair a broken Ext4 Superblock in Ubuntu", but I wasn't successful.

        # fsck.ext4 -v /dev/sdb
        e2fsck 1.42.5 (29-Jul-2012)
        ext2fs_open2: Bad magic number in super-block
        fsck.ext4: Superblock invalid, trying backup blocks...
        fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193

    Filesystem blocks are invalid; however, when I run the recommended solution to try the alternate superblock, I get the following output:

        # e2fsck -b 8193 /dev/sdb
        e2fsck 1.42.5 (29-Jul-2012)
        e2fsck: Invalid argument while trying to open /dev/sdb

    plus the same error message as in the first block above. Any ideas how to recover the drive? Thank you very much! Edit: testdisk won't help. I'm still stunned why the tools only discover 17 MB.

    Read the article

  • Does a lighter wallpaper consume less power than a darker one?

    - by Lamb
    My VAIO advised me to switch to a white background to reduce power consumption, but this seems contradictory to common logic. It's like saying a brighter torch consumes less than a dimmer one. Common logic says that the screen is black when not using any power, so displaying black should not consume any power (because without powering any pixel it shows black). Also, a white screen gives me more light than a darker one, so it should use more energy.

        White wallpaper = more light output = more electrical energy consumed
        Black wallpaper = less light = less electricity consumed

    My question is: is there anything wrong with the above argument? "A lighter wallpaper consumes less power than a darker one": is it true? If yes, why?

    Read the article

  • Puppet service restart failure

    - by Roman
    Please help me with a service restart problem.

        # changing the iptables file
        file { "/etc/sysconfig/iptables":
            ensure  => "present",
            content => template("all_in_one/iptables.erb"),
            owner   => root,
            group   => root,
            mode    => 600,
        }

        service { "iptables":
            ensure     => running,
            enable     => true,
            hasstatus  => true,
            hasrestart => true,
            subscribe  => File["/etc/sysconfig/iptables"]
        }

    The output is:

        err Failed to call refresh: Could not restart Service[iptables]: Execution of '/sbin/service iptables restart' returned 1

    Read the article

  • Can't play Steel Storm, Burning Retribution

    - by Goytor
    I've bought Steel Storm, Burning Retribution in the Software Center, and every time I run it, it shows the following message: "You have reached this menu due to missing or unlocable content/data. You may consider adding -base dir /path/to/game to your launch commandline." I've gone to the main menu in the preferences tab and changed the launcher, to no avail. I've tried running it from the console with /opt/steelstorm-episode2/steelstorm, and I got:

        Game is Steel-Storm using base gamedir gamedata
        Steel-Storm Linux 01:07:07 Jun 11 2011 - release
        Playing shareware version.
        Skeletal animation uses SSE code path
        DPSOFTRAST available (SSE2 instructions detected)
        Failed to init SDL joystick subsystem:
        couldn't exec quake.rc
        couldn't exec default.cfg
        execing config.cfg
        couldn't exec autoexec.cfg
        Client using an automatically assigned port
        Client opened a socket on address 0.0.0.0:0
        Client opened a socket on address [0:0:0:0:0:0:0:0]:0
        Linked against SDL version 1.2.12
        Using SDL library version 1.2.14
        GL_VENDOR: NVIDIA Corporation
        GL_RENDERER: GeForce 6150SE nForce 430/PCI/SSE2/3DNOW!
        GL_VERSION: 2.1.2 NVIDIA 270.41.06
        vid.support.arb_multisample 1
        vid.mode.samples 0
        vid.support.gl20shaders 1
        Video Mode: fullscreen 640x480x32x0.00hz
        S_Startup: initializing sound output format: 48000Hz, 16 bit, 2 channels...
        Wanted audio Specification:
            Channels  : 2
            Format    : 0x8010
            Frequency : 48000
            Samples   : 2048
        Obtained audio specification:
            Channels  : 2
            Format    : 0x8010
            Frequency : 48000
            Samples   : 1024
        Sound format: 48000Hz, 2 channels, 16 bits per sample
        CDAudio_Init: No CD in player.
        Can't get initial CD volume
        CD Audio Initialized

    If I try -base /opt/steelstorm-episode2/steelstorm, it says "command not found".

    Read the article

  • Missing disk space in Windows XP

    - by Jørn Schou-Rode
    On my mother's Lenovo laptop, Windows XP claims that the hard drive is almost full. According to the properties window, 52.7 out of 55.2 GB is in use. By deleting temp files from Internet Explorer, System Restore, the Recycle Bin, Windows Update and System Cleanup, I managed to free up about one GB. That's still over 50 GB in use, which is still a lot more than I expected. Hence, I gave good old WinDirStat a spin; it might be hard to read here, but the first line of its output says that the total amount of disk space in use on drive C is 24.3 GB. So Windows claims usage of 52.7 GB and WinDirStat can only account for 24.3 GB. Where is the other half of that disk space being used? I hope someone has an answer, or some tricks or tips to do further research. UPDATE: The laptop in question has an SSD hard drive. I am aware that these disks (at least the earlier ones) have a limited lifetime. Could the symptoms described be caused by wear and tear on the SSD?

    Read the article

  • Legitimate use of the Windows "Documents" folder in programs.

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job [1]. So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess. My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    [1] For the record, here's a quick summary of the various standard directories that should be used instead of "Documents":

        RoamingAppData - for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here, though, because they slow down login/logout in such environments.
        LocalAppData - for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed.
        ProgramData - for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network.
        GetTempPath - for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because, like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble.

    "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
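    As a practical footnote to the list above, a .NET program can resolve each of those locations at run time instead of hard-coding anything under "Documents". The following is a minimal sketch; the folder roles in the comments mirror the summary above, and the snippet is illustrative rather than something taken from the question itself.

        using System;
        using System.IO;

        class AppPaths
        {
            static void Main()
            {
                // Roaming, user-specific data and settings (follows the user across machines).
                string roaming = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);

                // User-and-machine-specific data; also the place for very large per-user files.
                string local = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);

                // Machine-specific data shared by all users (ProgramData).
                string common = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);

                // Disposable caches and temporary files.
                string temp = Path.GetTempPath();

                // Only appropriate when the user explicitly chose it in an Open/Save dialog.
                string documents = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);

                Console.WriteLine("Roaming:   " + roaming);
                Console.WriteLine("Local:     " + local);
                Console.WriteLine("Common:    " + common);
                Console.WriteLine("Temp:      " + temp);
                Console.WriteLine("Documents: " + documents);
            }
        }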

    Read the article

  • Alternative to the tee command without STDOUT

    - by aef
    I'm using | sudo tee FILENAME quite often to be able to write or append to a file for which superuser permissions are required. Although I understand why it is helpful in some situations that tee also sends its input to STDOUT again, I have never actually used that part of tee for anything. In most situations, this feature only causes my screen to be filled with unwanted jitter if I don't go the extra step and manually silence it with tee 1> /dev/null. My question: is there a command around which does exactly the same thing as tee, but by default does not output anything to STDOUT?

    Read the article
