Search Results

Search found 40479 results on 1620 pages for 'binary files'.

  • Five development tools I can't live without

    - by bconlon
    When applying to join Geeks with Blogs I had to specify the development tools I use every day. That got me thinking: it has taken a long time to whittle my tools of choice down to the selection I use, so it might be worth sharing. Before I begin, I appreciate we all have our preferred development tools, but these are the ones that work for me.

    Microsoft Visual Studio

    Microsoft Visual Studio has been my development tool of choice for more years than I care to remember. I first used it when it was Visual C++ 1.5 (hats off to those who started on 1.0), and by 2.2 it had everything I needed from a C++ IDE. Versions 4 and 5 followed, and if I had to guess I would expect more Windows applications were written in VC++ 6 and VB6 than in any other language. Then came the not-so-great versions: Visual Studio .NET 2002 (7.0) and 2003 (7.1). If I'm honest, I was still using v6. 2005 was better, and 2008 was simply brilliant: everything worked, the compiler was super fast, and I was happy again... then came 2010... oh dear. 2010 is a big step backwards for me. It's not encouraging for my upcoming WPF exploits that 2010 is fronted in WPF technology, with the forever-growing Find/Replace dialog, the issues with C++ IntelliSense, and the buggy debugger. That said, it is still my tool of choice, and I hope they sort these issues out in SP1. I've tried other IDEs like Visual Age and Eclipse, but for me Visual Studio is the best. A really great tool.

    Liquid XML Studio

    XML development is a tricky business. The W3C standards are often difficult to get to the bottom of, so it's great to have a graphical tool to help. I first used Liquid Technologies' tools 5 or 6 years back when I needed to process XML data in C++. Their excellent XML Data Binding tool has an easy-to-use wizard UI (as compared to the Castor or JAXB command line tools) and allows you to generate code from an XML Schema. So instead of having to deal with untyped nodes, as with a DOM parser, you get an object model providing a custom API in C++, C#, VB, etc. More recently they developed a graphical XML IDE with an XML editor, XSLT and XQuery debuggers, and other XML tools. So now I can develop an XML Schema graphically, click a button to generate a sample XML document, and click another button to run the wizard to generate code, including a sample application that will then load my sample XML document into the generated object model. This is a very cool toolset. Note: XML Data Binding has nothing to do with WPF data binding, but I hope to cover both in more detail another time.

    .Net Reflector

    Note: I've just noticed that starting from the end of February 2011 this will no longer be a free tool!! .Net Reflector turns .NET byte code back into C# source code. But how can it work this magic? Well, the clue is in the name: it uses reflection to inspect a compiled .NET assembly. The assembly is compiled to byte code; it doesn't get compiled to native machine code until it's needed, by a just-in-time (JIT) compiler. The byte code still has all of the information needed to see classes, variables, methods and properties, so Reflector gathers this information and puts it in a handy tree. I have used .Net Reflector for years in order to understand what the .NET Framework is doing, as it sometimes has undocumented, quirky features. This really has been invaluable in certain instances, and I cannot give enough kudos to the original developer, Lutz Roeder.
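    To make that "magic" a little more concrete, here is a minimal C# sketch (my own illustration, not Reflector's actual code) of how the reflection API can walk an assembly's types in just the way described above:

        using System;
        using System.Reflection;

        class ReflectionDemo
        {
            static void Main(string[] args)
            {
                // Load a compiled .NET assembly from disk (path given on the command line).
                Assembly asm = Assembly.LoadFrom(args[0]);

                // Walk every type and list its members, much like Reflector's tree view.
                foreach (Type t in asm.GetTypes())
                {
                    Console.WriteLine(t.FullName);
                    foreach (MethodInfo m in t.GetMethods())
                        Console.WriteLine("    method:   " + m.Name);
                    foreach (PropertyInfo p in t.GetProperties())
                        Console.WriteLine("    property: " + p.Name);
                }
            }
        }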
    Smart Assembly

    In order to stop nosy geeks looking at our code using a tool like .Net Reflector, we need to obfuscate (mess up) the byte code. Smart Assembly is a tool that does this. Again, I have used this for a long time. It is very quick and easy to use. Another excellent tool. Coincidentally, .Net Reflector and Smart Assembly are now both owned by Red Gate. Again, kudos goes to the original developer, Jean-Sebastien Lange.

    TortoiseSVN

    SVN (Apache Subversion) is a source control system developed as an open source project. TortoiseSVN is a graphical UI wrapper over SVN that hooks into Windows Explorer to enable files to be updated, committed, merged etc. from the right-click menu. This is an essential tool for keeping my hard work safe! Many years ago I used Microsoft SourceSafe, and I disliked CVS-type systems. But TortoiseSVN is simply the best source control tool I have ever used.

    ---

    So there you have it: my top 5 development tools that I use (nearly) every day and that have helped to make my working life a little easier. I'm sure there are other great tools that I wish I used but have never heard of, but if you have not used any of the above, I would suggest you check them out, as they are all very, very cool products.

    Read the article

  • Many Different Things Rolled into a Ball

    - by MOSSLover
    Yeah, I know I don't blog much anymore, because life has taken me places that don't involve the interwebs, unfortunately. I am in the midst of planning two events, starting a not-for-profit, creating more sessions for various conferences, submitting to various conferences, working a 40-hour-a-week job, and attempting to hang out with boyfriend/friends/family. So you can see that list does not include this blog; sadly, that's how it goes sometimes. The bottom items take priority over any of the top ones. I haven't seen St. Louis in a while and I get to go back. I was gone from home for MVP Summit and the Best Practices Conference, so the boyfriend and cat didn't get to see me either for a bit. Then you have to add in the whole broken-toilet fiasco this week. Maintenance really thought it would be cool to turn off the ability to flush. I mean, who does that? Then when we called the owner he came by and turned it on, and we figured it was an accident, because the next day no one came by to tell us there was a leak. It was all kinds of strangeness and involved me running to other people's toilets. As Dan Usher would say, I was a sad panda for a few days.

    So I guess I wanted to post a few thoughts here, just because I can. I do not like multiple Content Editor web parts embedded with HTML files in numerous pages doing the same thing. I will tell you why I don't like these particular web parts and the way they are being used. First off, if you have a bunch of pages with script includes, it's about time you just dumped them into the master page. Why bother finding all 20 pages and changing them when you can use the single master page that already exists?

    The other thing that has been bothering me these days is screen scraping. Just don't do it, because in 2010 you will find the UI is substantially slower. I understand you are new and have no idea what to do. You are also using 2007, am I right? So then you need to go to codeplex.com and type in a search for SPServices. Download it, use it, love it, and then have its babies (well, maybe don't go so far; this is not the GRID in Tron).

    If you have a ton of constants in your code, why did you not go in and create a web part with a bunch of properties and/or link to a configuration list hidden in the browser? This type of property and list could help you out in the long run: the power users and administrators can now change the control without you having to compile it over and over again. It's good stuff. Being able to change the control without compiling it matters especially in 2007, where you have to do a farm solution. In 2010 you can do a sandbox solution, I guess, but shouldn't you make it as easy and supportable as possible for other users?

    In conclusion, I'm an angry person when it comes to viewing something repeatedly and analyzing it in a system. Now we will move on to the next topic: MVP Summit. So yeah, I can't really talk about particulars, but I can talk about my experience as a person. Don't build something up to be cooler than it is, only to be dropped from your 10,000-foot perch. My experience was great, but the content overall left something to be desired. It's OK; I got to meet a lot of people I would not have met if I had not gone. Some of it was surreal, such as product group members showing up and talking to us. It was pretty neat. Plus I'd never before had the chance to get to that mythical MS office in Redmond. Prior to Summit it was like Rainbow Brite's unicorn taunting me on television when I was a kid.
    So I guess, with all that said, I give it a B. It was awesome in some ways, but lacking in others. The cool part is that I got to go. Would I have lived without going? Yes, but it was still cool. I could prattle on about other things and make this post massive, but I'm going to pass and give myself a piece of Sunday to play Rock Band and do 800 other things. I hope the two of you who read this blog are well. I'll catch you all at another juncture. Have a good weekend and the varying holidays in between.

    Technorati Tags: SharePoint, MVP Summit, JQuery, Javascript

    Read the article

  • WinPE, Startnet.CMD and passing variables to second batch file not working

    - by user140892
    I don't know scripting or PowerShell (yes, I need to learn something), and I'm not an expert batch file maker either. I have a WinPE flash drive which I use to deploy OS images. I keep the WIM, the drivers and everything else needed outside the WinPE environment, so that updates and changes are easier for me to make.

    I use the STARTNET.CMD batch file which is part of WinPE. The reason for going through the drive letters is that WinPE always gets the X drive letter assigned, while the flash drive itself can receive a random letter which always changes. My deployment menu is located on the flash drive itself, not inside the WinPE image, so that if I need to make a change I don't have to re-do the WinPE image.

    I am able to locate the menu.bat batch file and launch it. I use a variable to capture the drive letter, call the second batch file named menu.bat, and pass the variable to it. When the second batch file loads, I believe I am calling the variable correctly: if I break out of the batch file I can echo the variable and see the expected reply. The issue is that I can't use the variable to work with anything in the second batch file. In my tests I can get this to work over and over; when it runs from the real USB flash drive it does not. I removed comments from the second batch file to make it smaller.

    My issue is that the files below all produce a message stating that the system cannot find the path specified:

        Diskpart
        Imagex.exe
        bcdboot.exe

    Why can't I get the variable to function properly when I try to use, for example, ImageX.exe?

    Contents of Startnet.cmd:

        @echo off
        for %%p in (a b c d e f g h i j k l m n o p q r s t u v w x y z) do if exist %%p:\Tools\ set w=%%p
        Set execpatch=%w%\Tools\
        call %w%:\Menu.bat \Tools\

    Contents of Menu.bat:

        @echo off
        set SecondPath=%1
        cls
        :Start
        cls
        Echo.
        Echo.==============================================================
        Echo.            Windows 7 64 Bit Ent Basic Desktops
        Echo.==============================================================
        Echo.
        Echo   A. 790 Windows 7 - Basic
        Echo.
        Echo.
        Echo   I. Exit
        Echo.
        Echo.
        set /p choice=Choose your option =
        if not '%choice%'=='' set choice=%choice:~0,1%
        if '%choice%'=='a' goto 790_Windows_7_Basic
        echo "%choice%" is not a valid (answer/command)
        echo.
        goto start

        :790_Windows_7_Basic
        REM
        DISKPART /s %SecondPath%BatchFiles\Make-Partition.txt
        %SecondPath%imagex.exe /apply %SecondPath%Images\Win7-64b-Ent-Basic-SysPreped.wim 1 o:\ /verify
        %SecondPath%bcdboot.exe o:\Windows /s S:
        Copy %SecondPath%Unattended\unattend.XML o:\Windows\System32\sysprep\unattend.XML /y
        xcopy %SecondPath%Drivers\790\*.* o:\Windows\INF\790\ /E /Q /Y
        MD o:\Windows\Setup\Scripts\
        Copy %SecondPath%BatchFiles\SetupComplete.cmd o:\Windows\Setup\Scripts\ /y
        Goto Done

        :Done
        Exit
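    For comparison, here is a stripped-down pair of batch files (hypothetical names, my own illustration rather than the poster's fix) showing the drive letter being passed as part of the argument itself, so the second file never depends on the current drive at the time it runs:

        rem --- first.cmd ---
        @echo off
        for %%p in (c d e f g) do if exist %%p:\Tools\ set w=%%p
        rem pass drive letter AND path as one argument
        call %w%:\Menu.bat %w%:\Tools\

        rem --- menu.bat ---
        @echo off
        rem %1 now contains the complete path, e.g. E:\Tools\
        set SecondPath=%1
        echo Using tools from %SecondPath%
        %SecondPath%imagex.exe /?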

    Read the article

  • Automating custom software installation in a zone

    - by mgerdts
    In Solaris 11, the internals of zone installation are quite different from what they were in Solaris 10. This difference gives the administrator far greater control over what software is installed in a zone. The rules in Solaris 10 are simple and inflexible: if a package is installed in the global zone and is not specifically excluded by package metadata from being installed in a zone, it is installed in the zone. In Solaris 11, the rules are still simple, but much more flexible: the packages you tell it to install, and the packages on which they depend, will be installed.

    So, where does the default list of packages come from? From the AI (auto installer) manifest, of course. The default AI manifest is /usr/share/auto_install/manifest/zone_default.xml. Within that file you will find:

        <software_data action="install">
            <name>pkg:/group/system/solaris-small-server</name>
        </software_data>

    So, the default installation will install pkg:/group/system/solaris-small-server. Cool. What is that? You can figure out what is in the package by looking for it in the repository with your web browser (click the manifest link), or use pkg(1). In this case, it is a group package (pkg:/group/), so we know that it just has a bunch of dependencies naming the packages it really wants installed.

        $ pkg contents -t depend -o fmri -s fmri -r solaris-small-server
        FMRI
        compress/bzip2
        compress/gzip
        compress/p7zip
        ...
        terminal/luit
        terminal/resize
        text/doctools
        text/doctools/ja
        text/less
        text/spelling-utilities
        web/wget

    If you would like to see the entire manifest from the command line, use pkg contents -r -m solaris-small-server.

    Let's suppose that you want to install a zone that also has Mercurial and a full-fledged installation of vim, rather than just the minimal vim-core that is part of solaris-small-server. That's pretty easy.

    First, copy the default AI manifest somewhere where you will edit it, and make it writable:

        # cp /usr/share/auto_install/manifest/zone_default.xml ~/myzone-ai.xml
        # chmod 644 ~/myzone-ai.xml

    Next, edit the file, changing the software_data section as follows:

        <software_data action="install">
            <name>pkg:/group/system/solaris-small-server</name>
            <name>pkg:/developer/versioning/mercurial</name>
            <name>pkg:/editor/vim</name>
        </software_data>

    To figure out the names of the packages, either search the repository using your browser, or use a command like pkg search hg.

    Now we are all ready to install the zone. If it has not yet been configured, that must be done as well:

        # zonecfg -z myzone 'create; set zonepath=/zones/myzone'
        # zoneadm -z myzone install -m ~/myzone-ai.xml
        A ZFS file system has been created for this zone.
        Progress being logged to /var/log/zones/zoneadm.20111113T004303Z.myzone.install
        Image: Preparing at /zones/myzone/root.
        Install Log: /system/volatile/install.15496/install_log
        AI Manifest: /tmp/manifest.xml.XfaWpE
        SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
        Zonename: myzone
        Installation: Starting ...
        Creating IPS image
        Installing packages from:
            solaris
                origin: http://localhost:1008/solaris/54453f3545de891d4daa841ddb3c844fe8804f55/
        DOWNLOAD       PKGS       FILES    XFER (MB)
        Completed   169/169 34047/34047  185.6/185.6
        PHASE                        ACTIONS
        Install Phase            46498/46498
        PHASE                          ITEMS
        Package State Update Phase   169/169
        Image State Update Phase         2/2
        Installation: Succeeded
        Note: Man pages can be obtained by installing pkg:/system/manual
        done.
        Done: Installation completed in 531.813 seconds.
        Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process.
        Log saved in non-global zone as /zones/myzone/root/var/log/zones/zoneadm.20111113T004303Z.myzone.install

    Now, for a few things that I've seen people trip over:

    - Ignore that bit about man pages - it's wrong. Man pages are already installed so long as the right facet is set properly. And that's a topic for another blog entry.
    - If you boot the zone and then just use zlogin myzone, you will see that services you care about haven't started and that svc:/milestone/config:default is starting. That is because you have not yet logged into the console with zlogin -C myzone.
    - If the zone has been booted for more than a very short while when you first connect to the zone console, it will seem like the console is hung. That's not really the case - hit ^L (control-L) to refresh the sysconfig(1M) screen that is prompting you for information.
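    As a quick sanity check once the zone is up (a sketch, assuming the zone name used above), you can ask the zone's own pkg(1) whether the extra packages made it in:

        # zoneadm -z myzone boot
        # zlogin -C myzone                     (complete sysconfig, then detach with ~.)
        # zlogin myzone pkg list mercurial vim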

    Read the article

  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server. What I'd like to have is the metaphorical "server under the stairs".

    Stuff we need:

    - Expandable storage. I want to be able to add extra discs as we go along, with minimal maintenance required. Currently we've got about 3Tb of files we need to host, and that's likely to grow by another Tb every 6-12 months based on recent history. I need to be able to add additional discs with minimal pain.
    - Needs to store all the media (i.e. photos, video, music) we have, and run services to serve the various devices we have in the house for playback (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work).
    - RAID 5, or something broadly equivalent (e.g. RAID-Z).
    - BitTorrent client.
    - ssh, NFS, Samba access.
    - Snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and roll back when my kids delete their school assignments the day before they're due... (see the sketch after this question for what that workflow looks like).
    - Ability to recover quickly from power outages (it's not unusual for us to have power outages that last longer than our UPS' batteries).
    - FOSS software.
    - A modern distributed version control system running on the box, such as Mercurial.

    Stuff I'd like to have on the server, but can live without:

    - PVR capability, so I could record TV to the box.
    - Web server. We currently run a small web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server, just to save some electricity.
    - Nagios + mrtg.

    I've been looking at using an EEE Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult. From what I've found:

    - I've got most experience with various Linux distros, but am happy to use another Unix.
    - FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS.
    - OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu's.
    - btrfs, while looking very good, doesn't seem ready for prime time yet.
    - ZFS doesn't let you (easily?) add new discs to a RAID5 or RAID-Z.
    - Reading around, it seems that ZFS is a bit short of tools for recovering lost data.

    At the moment I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to be able to add new discs on a fairly regular basis to an existing RAID-Z. Can anyone provide some recommendations, or share experiences? Thanks in advance
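    For reference, the snapshot/rollback workflow mentioned above is a one-liner per operation in ZFS. A minimal sketch, assuming a pool named tank with a filesystem per user:

        # zfs snapshot tank/home/kids@sunday    # cheap, near-instant checkpoint
        # zfs list -t snapshot                  # see what can be rolled back to
        # zfs rollback tank/home/kids@sunday    # recover the deleted assignment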

    Read the article

  • How can I read Kindle books under Xfce (Ubuntu) on a Chromebook? (Wine not working)

    - by yshn
    I'm using chromebook, dual booting xfce(ubuntu) and cr os. The ebook I bought on amazon is not supported on kindle cloud reader. (Under xfce)I downloaded wine and tried installing kindle for pc under wine, and after couples of times of trials, it always said installation error and could not install kindle, and it's been giving me: Unhandled exception: unimplemented function msvcp90.dll.??0?$basic_ofstream@DU?$char_traits@D@std@@@std@@QAE@XZ called in 32-bit code (0x7b839cf2). Register dump: CS:0023 SS:002b DS:002b ES:002b FS:0063 GS:006b EIP:7b839cf2 ESP:0033fcd4 EBP:0033fd38 EFLAGS:00000287( - -- I S - -P-C) EAX:7b826245 EBX:7b894ff4 ECX:00000008 EDX:0033fcf4 ESI:80000100 EDI:00dca568 Stack dump: 0x0033fcd4: 0033fd58 00000008 00000030 80000100 0x0033fce4: 00000001 00000000 7b839cf2 00000002 0x0033fcf4: 7e24b340 7e24f2ca 0000000d 00110000 0x0033fd04: 7bc47a0d 7e1dbff4 7e1417f0 00dca568 0x0033fd14: 0033fd24 7bc65d0b 00110000 00000000 0x0033fd24: 0033fd44 7e141801 7b839caa 7e1dbff4 000c: sel=0067 base=00000000 limit=00000000 16-bit r-x Backtrace: =0 0x7b839cf2 in kernel32 (+0x29cf2) (0x0033fd38) 1 0x7e24b2a8 in msvcp90 (+0x3b2a7) (0x0033fd68) 2 0x7e216c9d in msvcp90 (+0x6c9c) (0x0033fde8) 3 0x00938fdd in kindle (+0x538fdc) (0x0033fde8) 4 0x0089dc71 in kindle (+0x49dc70) (0x0033fe70) 5 0x7b859cdc call_process_entry+0xb() in kernel32 (0x0033fe88) 6 0x7b85af4f in kernel32 (+0x4af4e) (0x0033fec8) 7 0x7bc71db0 call_thread_func_wrapper+0xb() in ntdll (0x0033fed8) 8 0x7bc7486d call_thread_func+0x7c() in ntdll (0x0033ffa8) 9 0x7bc71d8e RtlRaiseException+0x21() in ntdll (0x0033ffc8) 10 0x7bc49f4e call_dll_entry_point+0x61d() in ntdll (0x0033ffe8) 0x7b839cf2: subl $4,%esp Modules: Module Address Debug info Name (130 modules) PE 340000- 37d000 Deferred ssleay32 PE 390000- 3ca000 Deferred webcoreviewer PE 3d0000- 3e0000 Deferred pthreadvc2 PE 400000- 1433000 Export kindle PE 1440000- 155c000 Deferred libeay32 PE 1560000- 169f000 Deferred qtscript4 PE 16a0000- 1795000 Deferred libxml2 PE 17a0000- 18c7000 Deferred javascriptcore PE 18d0000- 1974000 Deferred cflite PE 1980000- 2048000 Deferred libwebcore PE 2050000- 208d000 Deferred libjpeg PE 10000000-10a34000 Deferred qtwebkit4 PE 4a800000-4a8eb000 Deferred icuuc46 PE 4a900000-4aa36000 Deferred icuin46 PE 4ad00000-4bb80000 Deferred icudt46 PE 5a4c0000-5a4d4000 Deferred zlib1 PE 61000000-61056000 Deferred qtxml4 PE 62000000-62093000 Deferred qtsql4 PE 64000000-640ef000 Deferred qtnetwork4 PE 65000000-657b8000 Deferred qtgui4 PE 67000000-67228000 Deferred qtcore4 PE 78050000-780b9000 Deferred msvcp100 PE 78aa0000-78b5e000 Deferred msvcr100 ELF 7b800000-7ba15000 Dwarf kernel32 -PE 7b810000-7ba15000 \ kernel32 ELF 7bc00000-7bcc3000 Dwarf ntdll -PE 7bc10000-7bcc3000 \ ntdll ELF 7bf00000-7bf04000 Deferred ELF 7d7f7000-7d800000 Deferred librt.so.1 ELF 7d800000-7d818000 Deferred libresolv.so.2 ELF 7d818000-7d861000 Deferred libdbus-1.so.3 ELF 7d861000-7d873000 Deferred libp11-kit.so.0 ELF 7d873000-7d8f8000 Deferred libgcrypt.so.11 ELF 7d8f8000-7d90a000 Deferred libtasn1.so.3 ELF 7d90a000-7d913000 Deferred libkrb5support.so.0 ELF 7d913000-7d9e2000 Deferred libkrb5.so.3 ELF 7da42000-7da47000 Deferred libgpg-error.so.0 ELF 7da47000-7da6f000 Deferred libk5crypto.so.3 ELF 7da6f000-7da81000 Deferred libavahi-client.so.3 ELF 7da81000-7da8f000 Deferred libavahi-common.so.3 ELF 7da8f000-7db53000 Deferred libgnutls.so.26 ELF 7db53000-7db91000 Deferred libgssapi_krb5.so.2 ELF 7db91000-7dbe4000 Deferred libcups.so.2 ELF 7dc21000-7dc55000 Deferred uxtheme -PE 
7dc30000-7dc55000 \ uxtheme ELF 7dc55000-7dc5b000 Deferred libxfixes.so.3 ELF 7dc5b000-7dc66000 Deferred libxcursor.so.1 ELF 7dc6a000-7dc6e000 Deferred libkeyutils.so.1 ELF 7dc6e000-7dc73000 Deferred libcom_err.so.2 ELF 7dca5000-7dccf000 Deferred libexpat.so.1 ELF 7dccf000-7dd03000 Deferred libfontconfig.so.1 ELF 7dd03000-7dd13000 Deferred libxi.so.6 ELF 7dd13000-7dd17000 Deferred libxcomposite.so.1 ELF 7dd17000-7dd20000 Deferred libxrandr.so.2 ELF 7dd20000-7dd2a000 Deferred libxrender.so.1 ELF 7dd2a000-7dd30000 Deferred libxxf86vm.so.1 ELF 7dd30000-7dd34000 Deferred libxinerama.so.1 ELF 7dd34000-7dd3b000 Deferred libxdmcp.so.6 ELF 7dd3b000-7dd5c000 Deferred libxcb.so.1 ELF 7dd5c000-7dd76000 Deferred libice.so.6 ELF 7dd76000-7deaa000 Deferred libx11.so.6 ELF 7deaa000-7debc000 Deferred libxext.so.6 ELF 7debc000-7dec5000 Deferred libsm.so.6 ELF 7ded4000-7df67000 Deferred winex11 -PE 7dee0000-7df67000 \ winex11 ELF 7df67000-7e001000 Deferred libfreetype.so.6 ELF 7e001000-7e023000 Deferred iphlpapi -PE 7e010000-7e023000 \ iphlpapi ELF 7e023000-7e03e000 Deferred wsock32 -PE 7e030000-7e03e000 \ wsock32 ELF 7e03e000-7e071000 Deferred wintrust -PE 7e040000-7e071000 \ wintrust ELF 7e071000-7e129000 Deferred crypt32 -PE 7e080000-7e129000 \ crypt32 ELF 7e129000-7e158000 Deferred msvcr90 -PE 7e130000-7e158000 \ msvcr90 ELF 7e158000-7e1e5000 Deferred msvcrt -PE 7e170000-7e1e5000 \ msvcrt ELF 7e1e5000-7e2ca000 Dwarf msvcp90 -PE 7e210000-7e2ca000 \ msvcp90 ELF 7e2ca000-7e2ec000 Deferred imm32 -PE 7e2d0000-7e2ec000 \ imm32 ELF 7e2ec000-7e3de000 Deferred oleaut32 -PE 7e300000-7e3de000 \ oleaut32 ELF 7e3de000-7e418000 Deferred winspool -PE 7e3f0000-7e418000 \ winspool ELF 7e418000-7e4f7000 Deferred comdlg32 -PE 7e420000-7e4f7000 \ comdlg32 ELF 7e4f7000-7e51f000 Deferred msacm32 -PE 7e500000-7e51f000 \ msacm32 ELF 7e51f000-7e5cc000 Deferred winmm -PE 7e530000-7e5cc000 \ winmm ELF 7e5cc000-7e641000 Deferred rpcrt4 -PE 7e5e0000-7e641000 \ rpcrt4 ELF 7e641000-7e749000 Deferred ole32 -PE 7e660000-7e749000 \ ole32 ELF 7e749000-7e841000 Deferred comctl32 -PE 7e750000-7e841000 \ comctl32 ELF 7e841000-7ea52000 Deferred shell32 -PE 7e850000-7ea52000 \ shell32 ELF 7ea52000-7eabc000 Deferred shlwapi -PE 7ea60000-7eabc000 \ shlwapi ELF 7eabc000-7ead5000 Deferred version -PE 7eac0000-7ead5000 \ version ELF 7ead5000-7eb35000 Deferred advapi32 -PE 7eae0000-7eb35000 \ advapi32 ELF 7eb35000-7ebf2000 Deferred gdi32 -PE 7eb40000-7ebf2000 \ gdi32 ELF 7ebf2000-7ed32000 Deferred user32 -PE 7ec00000-7ed32000 \ user32 ELF 7ed32000-7ed58000 Deferred mpr -PE 7ed40000-7ed58000 \ mpr ELF 7ed58000-7ed6e000 Deferred libz.so.1 ELF 7ed6e000-7eddd000 Deferred wininet -PE 7ed80000-7eddd000 \ wininet ELF 7eddd000-7ee0f000 Deferred ws2_32 -PE 7ede0000-7ee0f000 \ ws2_32 ELF 7ee0f000-7ee1c000 Deferred libnss_files.so.2 ELF 7ee1c000-7ee28000 Deferred libnss_nis.so.2 ELF 7ee28000-7ee42000 Deferred libnsl.so.1 ELF 7ee42000-7ee4b000 Deferred libnss_compat.so.2 ELF 7efd4000-7f000000 Deferred libm.so.6 ELF f74a3000-f74a7000 Deferred libxau.so.6 ELF f74a8000-f74ad000 Deferred libdl.so.2 ELF f74ad000-f7657000 Deferred libc.so.6 ELF f7658000-f7673000 Deferred libpthread.so.0 ELF f7675000-f767b000 Deferred libuuid.so.1 ELF f7682000-f77c4000 Dwarf libwine.so.1 ELF f77c6000-f77e8000 Deferred ld-linux.so.2 ELF f77e8000-f77e9000 Deferred [vdso].so Threads: process tid prio (all id:s are in hex) 0000000e services.exe 0000001f 0 0000001e 0 00000015 0 00000010 0 0000000f 0 00000012 winedevice.exe 0000001c 0 00000019 0 00000014 0 00000013 0 0000001a plugplay.exe 
00000020 0 0000001d 0 0000001b 0 00000037 explorer.exe 00000038 0 00000042 (D) C:\Program Files (x86)\Amazon\Kindle\Kindle.exe 00000043 0 <== System information: Wine build: wine-1.4 Platform: i386 (WOW64) Host system: Linux Host version: 3.8.11 How can this be fixed?

    Read the article

  • CommunicationException when shutting down JBoss 4.2.2

    - by Brian
    I have deployed an application using JBoss 4.2.2 on a 64-bit RHEL5 server. Since there are other JBoss servers, I had to change some port configurations so that there would be no conflicts when starting the server. Right now I'm using ports-01 from the sample-bindings.xml file that came in the docs/examples/binding-manager/samples directory. In addition, below is a list of all the files I've edited to reflect the new ports:

        JBOSS_HOME/servers/default/deploy/jboss-web.deployer/server.xml:
            Changed Connector port 8080 to 8180
            Changed AJP 1.3 Connector port 8009 to 8109
        JBOSS_HOME/server/default/deploy/jbossws.beans/META-INF/jboss-beans.xml:
            Changed 8080 to 8180
        JBOSS_HOME/server/default/conf/jboss-service.xml:
            Changed 8083 to 8183
            Changed 1099 to 1299
            Changed 1098 to 1298
            Changed 4444 to 4644
            Changed 4445 to 4645
            Changed 4446 to 4646
            Changed 4447 to 4647
        JBOSS_HOME/server/default/conf/jboss-minimal.xml:
            Changed 1099 to 1299
            Changed 1098 to 1298

    When I start the server (binding to localhost) everything is fine and I'm able to access the application. But when I try to shut down the server I get the following error:

        Exception in thread "main" javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost [Root exception is javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]]]
            at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1562)
            at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:634)
            at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:627)
            at javax.naming.InitialContext.lookup(InitialContext.java:392)
            at org.jboss.Shutdown.main(Shutdown.java:214)
        Caused by: javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]]
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:274)
            at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1533)
            ... 4 more
        Caused by: javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:248)
            ... 5 more
        Caused by: java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:525)
            at java.net.Socket.connect(Socket.java:475)
            at java.net.Socket.<init>(Socket.java:372)
            at java.net.Socket.<init>(Socket.java:273)
            at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:84)
            at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:77)
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:244)
            ... 5 more

    Is there any other file in which I need to change 1099 to 1299, or am I missing some other step?
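    For what it's worth, a hedged observation: the stack trace shows the shutdown client still looking up JNDI on localhost:1099, and the JBoss 4.x shutdown script accepts an explicit server URL, so pointing it at the moved port would look something like the sketch below (assuming shutdown.sh's -s/--server switch as in JBoss 4.x):

        # default: shutdown.sh looks for the naming service on localhost:1099
        ./shutdown.sh -S -s jnp://localhost:1299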

    Read the article

  • Is Steam for Mac effectively running as superuser?

    - by godDLL
    When you download the client it does not weigh much, and seems to do very little. Inside the app bundle there is a script that, upon inspecting the environment and deciding you're not running Linux, launches the client, which downloads the full support environment and resources. For this to happen (all of it is saved inside the bundle; the app bundle gets updated in this process) Steam wants Universal Access for Assistive Devices, and your password.

    Cacheable resources, preferences (like keyboard shortcuts) and support files (like game hardware requirement lookup tables) live inside the bundle, not in ~/Library/{Application Support|Preferences|Cache}; games' data gets dumped into ~/Documents/Steam Content.

    I'd describe myself as a bit OCD (which really says a lot), and I wouldn't care that much still. I'd go comb this hairy mess and find out where stuff is, when and if I need to, even if it's in an unfamiliar place; that does not actually tick me off. Well, a little bit.

    What makes me concerned is the way Steam needs both Access for Assistive Devices and my password to run. The former gives it the ability to talk very intimately with running apps and the underlying system, while the latter (an admin account) could very well give it and its publishers unrestricted access to all my software, hardware and data. With publishers like Rockstar using scene no-CD cracks to publish their games on Steam, I'm not so sure I'm OK with this.

    I'd like more games made available for Mac OS X and all the pretty machines that run it, but this arrangement does not seem very Mac-like to me. It looks like Valve is going around system security measures and best practices, foregoing sandboxing, code signing, and relatively sane structured organization; all the things that would appeal to someone who's no fun at parties at all, and will die alone, in his long dead mother's basement… wait. Right. Anyway.

    Can we get some input on Steam for Mac security at the end-user machine, from someone who understands how the Accessibility API works: whether games distributed on Steam can read and write outside the user home folder, collect data from other running apps, or similar?

    Read the article

  • Problem connecting to SSH in office network

    - by Jeune
    I have trouble connecting via SSH to a server whenever I am in the office. I get as far as being prompted for my password, and then after that there's a long wait which always ends in:

        Write failed: Broken pipe

    This is only for connecting via SSH. I use svn to commit files to a repository hosted on the same server and there are no hitches. Furthermore, this only happens in our office. When I go to the university, or whenever I am at home or at the coffee shop, I am able to connect seamlessly. There are no firewalls in our office; it's just a basic wireless router connected to a modem, the same setup I have at home and, I guess, the same setup in the coffee shop.

    What are the causes of a broken pipe, and why does this phenomenon only happen when I try to connect via SSH and not when I work with svn on the same server?

    Updated: some debug logs after authentication:

        debug3: packet_send2: adding 48 (len 64 padlen 16 extra_pad 64)
        debug2: we sent a password packet, wait for reply
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug3: ssh_session2_open: channel_new: 0
        debug2: channel 0: send open
        debug1: Entering interactive session.
        debug2: callback start
        debug2: client_session2_setup: id 0
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.
        debug3: Ignored env ORBIT_SOCKETDIR
        debug3: Ignored env SSH_AGENT_PID
        debug3: Ignored env TERM
        debug3: Ignored env SHELL
        debug3: Ignored env XDG_SESSION_COOKIE
        debug3: Ignored env WINDOWID
        debug3: Ignored env GNOME_KEYRING_CONTROL
        debug3: Ignored env GTK_MODULES
        debug3: Ignored env USER
        debug3: Ignored env LS_COLORS
        debug3: Ignored env LIBGL_DRIVERS_PATH
        debug3: Ignored env SSH_AUTH_SOCK
        debug3: Ignored env DEFAULTS_PATH
        debug3: Ignored env SESSION_MANAGER
        debug3: Ignored env USERNAME
        debug3: Ignored env XDG_CONFIG_DIRS
        debug3: Ignored env DESKTOP_SESSION
        debug3: Ignored env LIBGL_ALWAYS_INDIRECT
        debug3: Ignored env PATH
        debug3: Ignored env PWD
        debug3: Ignored env GDM_KEYBOARD_LAYOUT
        debug1: Sending env LANG = en_PH.utf8
        debug2: channel 0: request env confirm 0
        debug3: Ignored env GNOME_KEYRING_PID
        debug3: Ignored env MANDATORY_PATH
        debug3: Ignored env GDM_LANG
        debug3: Ignored env GDMSESSION
        debug3: Ignored env SHLVL
        debug3: Ignored env HOME
        debug3: Ignored env GNOME_DESKTOP_SESSION_ID
        debug3: Ignored env LOGNAME
        debug3: Ignored env XDG_DATA_DIRS
        debug3: Ignored env DBUS_SESSION_BUS_ADDRESS
        debug3: Ignored env LESSOPEN
        debug3: Ignored env WINDOWPATH
        debug3: Ignored env DISPLAY
        debug3: Ignored env LESSCLOSE
        debug3: Ignored env XAUTHORITY
        debug3: Ignored env COLORTERM
        debug3: Ignored env OLDPWD
        debug3: Ignored env _
        debug2: channel 0: request shell confirm 1
        debug2: fd 3 setting TCP_NODELAY
        debug2: callback done
        debug2: channel 0: open confirm rwindow 0 rmax 32768

    UPDATE 2011-14-07: I am able to connect to the server via SSH now. I didn't do anything, but that's because there is no one in the office but me! Having said that, is it possible that it has something to do with the number of sessions an SSH server can handle?

    UPDATE 2011-14-07: I tried to log in via SSH through PuTTY on another machine running Windows, together with my current SSH session in Ubuntu, and now it seems my SSH session in Ubuntu has been dropped. I can't type into the terminal. Is PuTTY the culprit now?
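    As an aside, and purely as an assumption about where to look rather than a diagnosis, client-side keepalives in ~/.ssh/config are the usual first experiment when interactive sessions die with a broken pipe while other protocols survive:

        Host officeserver                  # made-up alias for the server above
            HostName server.example.com    # placeholder hostname
            ServerAliveInterval 15         # send a keepalive probe every 15s
            ServerAliveCountMax 3          # give up after 3 unanswered probes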

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server.

    We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours, all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the web server was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out - no blocking was occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

    - Current Anonymous Users (for the site in question)
    - Get Requests/sec (ditto)
    - Requests/sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down.

    I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady, reading (I have added a Current Connections counter):

    - Current Anonymous Users: average 30
    - Get Requests/sec: average 100
    - Requests/sec for ASP.NET: 5
    - Current Connections: average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there will be short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image. Then they flip back to their usual levels.

    So, my questions are:

    - Thinking of the original performance issue: if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer than usual to be served?
    - What other counters should I be looking at if this happens again?
    - What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
    - What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.

    Read the article

  • Varnish 503 Guru Meditation errors with pfSense and healthy Apache

    - by Fammy
    We are running a pfSense firewall / load balancer with Varnish as a service, in front of Fedora Linux web servers running Apache. We are getting intermittent 503 Guru Meditation errors, and we are a bit stuck scratching our heads because the problem is not easily repeatable.

    The timeouts are set to 30s (connect and first byte), yet the 503 page shows instantly, not after 30s. If you refresh immediately it may very well work instantly, and sometimes for a hundred refreshes. The load average on the web servers is < 1, and on the DB server < 3 (all servers - web, DB, pfSense/Varnish - are physical rather than VMs). I would have thought that if the timeouts were being hit, the 503 page would only appear after 30s; am I mistaken? Also, when an error happens there does not appear to be any corresponding error in Apache's log files. This affects pages as well as images, so it is possible for a page to load fine, and for 9 out of 10 images on the page to be fine, but one not to work.

    An example of the varnish debug is below. It says "no backend connection", but I can't figure out why; if the load were high on Apache I could understand it being flaky. The machines are on the same gigabit Ethernet LAN.

        21 ReqStart   c *IP-REMOVED* 33418 1274368062
        21 RxRequest  c GET
        21 RxURL      c /fashion/
        21 RxProtocol c HTTP/1.1
        21 RxHeader   c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.5) Gecko/2008121622 Fedora/3.0.5-1.fc10 Firefox/3.0.5
        21 RxHeader   c Host: *ourdomain.com*
        21 RxHeader   c Accept: */*
        21 RxHeader   c Accept-Encoding: deflate, gzip
        21 VCL_call   c recv lookup
        21 VCL_call   c hash
        21 Hash       c /fashion/
        21 Hash       c *ourdomain.com*
        21 VCL_return c hash
        21 VCL_call   c miss fetch
        21 FetchError c no backend connection
        21 VCL_call   c error restart
        21 VCL_call   c recv lookup
        21 VCL_call   c hash
        21 Hash       c /fashion/
        21 Hash       c *ourdomain.com*
        21 VCL_return c hash
        21 VCL_call   c miss fetch
        21 FetchError c no backend connection
        21 VCL_call   c error restart
        21 VCL_call   c recv lookup
        21 VCL_call   c hash
        21 Hash       c /fashion/
        21 Hash       c *ourdomain.com*
        21 VCL_return c hash
        21 VCL_call   c miss fetch
        21 FetchError c no backend connection
        21 VCL_call   c error deliver
        21 VCL_call   c deliver deliver
        21 TxProtocol c HTTP/1.1
        21 TxStatus   c 503
        21 TxResponse c Service Unavailable
        21 TxHeader   c Server: Varnish
        21 TxHeader   c Content-Type: text/html; charset=utf-8
        21 TxHeader   c Content-Length: 384
        21 TxHeader   c Accept-Ranges: bytes
        21 TxHeader   c Date: Wed, 11 Apr 2012 10:36:17 GMT
        21 TxHeader   c X-Varnish: 1274368062
        21 TxHeader   c Age: 0
        21 TxHeader   c Via: 1.1 varnish
        21 TxHeader   c Connection: close
        21 TxHeader   c X-Cache: MISS
        21 Length     c 384
        21 ReqEnd     c 1274368062 1334140577.449995041 1334140577.450334787 1.794108152 0.000282764 0.000056982
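    For context, the 30s timeouts described above would normally be declared on the backend in the VCL, something like the sketch below (host, name and probe values are assumptions, not the actual config):

        backend web1 {
            .host = "192.168.1.10";    # assumed backend address
            .port = "80";
            .connect_timeout = 30s;
            .first_byte_timeout = 30s;
            .probe = {
                .url = "/";            # health check request
                .interval = 5s;
                .timeout = 2s;
                .window = 5;
                .threshold = 3;
            }
        }

    One thing worth noting about this design: if a probe like this is defined and failing, Varnish marks the backend sick and returns "no backend connection" immediately rather than waiting out the timeouts, which would at least be consistent with the instant 503s described above.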

    Read the article

  • ASP.NET: Using pickup directory for outgoing e-mails

    - by DigiMortal
    Sending e-mails out from web applications is a very common task. When we are working on or testing our systems with real e-mail addresses, we don't want recipients to receive those e-mails (especially if we are using some subset of real data). In this posting I will show you how to make the ASP.NET SMTP client write e-mails to disc instead of sending them out.

    SMTP settings for web application

    I have seen many times code where all SMTP information is kept in app settings, just to be read in code and handed to the SMTP client. This is not necessary, because we can define all these settings under the system.web => mailSettings node. If you are using web.config to keep SMTP settings, then all you have to do in your code is create the SmtpClient with the empty constructor:

        var smtpClient = new SmtpClient();

    The empty constructor means that all settings are read from the web.config file.

    What is a pickup directory?

    If you want to drastically raise the e-mail throughput of your SMTP server, then it is not a very wise plan to communicate with it using the SMTP protocol; that only adds overhead to your network and SMTP server. Okay, clients make connections and send messages out, and that too is overhead we can avoid. If clients instead write their e-mails to some folder that the SMTP server can access, then forwarding the e-mails is the only resource-hungry task left for the SMTP server. File operations are way faster than communication over the SMTP protocol. The directory where clients write their e-mails as files is called the pickup directory. Exchange Server, by example, has support for pickup directories. And as there are applications with a lot of users who want e-mail notifications, the .NET SMTP client supports writing e-mails to a pickup directory instead of sending them out.

    How to configure ASP.NET SMTP to use a pickup directory?

    Let's say it is more than easy. It is very easy. This is all you need:

        <system.net>
          <mailSettings>
            <smtp deliveryMethod="SpecifiedPickupDirectory">
              <specifiedPickupDirectory pickupDirectoryLocation="c:\temp\maildrop\"/>
            </smtp>
          </mailSettings>
        </system.net>

    Now make sure you don't miss some points:

    - The pickup directory must physically exist, because it is not created automatically.
    - IIS (or Cassini) must have write permissions to the pickup directory.
    - Go through your code and look for hard-coded SMTP settings. Also take a look at all the places in your code where you send out e-mails, and check that no custom SMTP settings are used there!

    Also don't forget that your mails will now be written to the pickup directory and are no longer sent out to recipients.

    Advanced scenario: configuring the SMTP client in code

    In some advanced scenarios you may need to support multiple SMTP servers. If the configuration is dynamic, or it is not kept in web.config, you need to initialize your SmtpClient in code. This is all you need to do:

        var smtpClient = new SmtpClient();
        smtpClient.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
        smtpClient.PickupDirectoryLocation = pickupFolder;

    Easy, isn't it? I like it when advanced scenarios end up with simple and elegant solutions, not with rocket science.

    Note for the IIS SMTP service

    The SMTP service of IIS is also able to use a pickup directory. If you have set up IIS with the SMTP service, you can configure your ASP.NET application to use the IIS pickup folder. In this case you have to use the following setting for the delivery method:

        SmtpDeliveryMethod.PickupDirectoryFromIis

    You can also set this in the web.config file.
        <system.net>
          <mailSettings>
            <smtp deliveryMethod="PickupDirectoryFromIis" />
          </mailSettings>
        </system.net>

    Conclusion

    Those who were still using different methods to avoid sending e-mails out in development or testing environments can now remove all that bad code from their applications and rely on the mail settings of ASP.NET. It is easy to configure, and you have less e-mail code to support when you use the built-in e-mail features wisely.
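    To round this off, here is a minimal sketch of the sending code (the addresses are placeholders I made up). Note that nothing in it is pickup-directory-specific; the delivery method is pure configuration:

        using System.Net.Mail;

        // All SMTP settings, including the pickup directory, come from web.config.
        var smtpClient = new SmtpClient();

        var message = new MailMessage(
            "noreply@example.com",   // placeholder sender
            "user@example.com",      // placeholder recipient
            "Test message",
            "With SpecifiedPickupDirectory this ends up as an .eml file on disc.");

        smtpClient.Send(message);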

    Read the article

  • Starting php-cgi at boot on mac os x 10.6.8

    - by nikhil
    I'm new to Mac OS. I have installed and configured nginx with php-fastcgi. At the moment I need to run this command in a terminal, and keep that terminal open, to access PHP files from my browser:

        php-cgi -b 127.0.0.1:9000 -q

    Here's the plist that I wrote by looking up sources on the internet:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Debug</key>
            <false/>
            <key>EnvironmentVariables</key>
            <dict>
                <key>PHP_FCGI_CHILDREN</key>
                <string>2</string>
                <key>PHP_FCGI_MAX_REQUESTS</key>
                <string>1000</string>
            </dict>
            <key>RunAtLoad</key>
            <true/>
            <key>KeepAlive</key>
            <true/>
            <key>UserName</key>
            <string>nikhil</string>
            <key>Label</key>
            <string>php-fastcgi</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/bin/php-cgi</string>
                <string>-b 127.0.0.1:9000</string>
                <string>-q</string>
            </array>
        </dict>
        </plist>

    I'm loading it with:

        launchctl load -w ~/Library/LaunchAgents/php-fastcgi.plist

    but without any success. Can anyone tell me how this can be done?
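    A few standard launchd checks that may help narrow this down (diagnostic commands only, not a known fix):

        launchctl list | grep php-fastcgi        # is the job loaded at all?
        tail -f /var/log/system.log              # launchd logs spawn errors here on 10.6
        which php-cgi                            # confirm the binary really is /usr/bin/php-cgi
        /usr/bin/php-cgi -b 127.0.0.1:9000 -q    # sanity-check the exact path from the plist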

    Read the article

  • Trouble using gitweb with nginx

    - by Rayne
    I have a git repository in a directory inside /home/raynes/pubgit/, and I'm trying to use gitweb to provide a web interface to it. I use nginx as my web server for everything else, so I don't really want to run another web server just for this. I'm mostly following this guide: http://michalbugno.pl/en/blog/gitweb-nginx, which is the only guide I can find via Google and is really recent.

    fcgiwrap apparently isn't in Lucid Lynx's repositories, so I installed it manually. I spawn instances via spawn-fcgi:

        spawn-fcgi -f /usr/local/sbin/fcgiwrap -a 127.0.0.1 -p 9001

    That's all good. My /etc/gitweb.conf is as follows:

        # path to git projects (<project>.git)
        #$projectroot = "/home/raynes/pubgit";
        $my_uri = "http://mc.raynes.me";
        $home_link = "http://mc.raynes.me/";
        # directory to use for temp files
        $git_temp = "/tmp";
        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";
        # html text to include at home page
        $home_text = "indextext.html";
        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = $projectroot;
        # stylesheet to use
        $stylesheet = "/gitweb/gitweb.css";
        # logo to use
        $logo = "/gitweb/git-logo.png";
        # the 'favicon'
        $favicon = "/gitweb/git-favicon.png";

    And my nginx server configuration is this:

        server {
            listen 80;
            server_name mc.raynes.me;
            location / {
                root /usr/share/gitweb;
                if (!-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9001;
                }
                fastcgi_index index.cgi;
                fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    The only difference from the guide is that I've set fastcgi_pass to 127.0.0.1:9001. When I go to http://mc.raynes.me I'm greeted with a page that simply says "403" and nothing else. I haven't the slightest clue what I did wrong. Any ideas?
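    One thing worth double-checking, offered as an assumption rather than a verified fix: with the config above, SCRIPT_FILENAME expands to /scripts plus the request path, while gitweb's CGI script normally lives at /usr/share/gitweb/gitweb.cgi. Wiring fcgiwrap to the real script would look roughly like:

        location /gitweb.cgi {
            fastcgi_pass  127.0.0.1:9001;
            fastcgi_param SCRIPT_FILENAME /usr/share/gitweb/gitweb.cgi;
            include       fastcgi_params;
        }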

    Read the article

  • How to enable caching on Apache / Ubuntu Linux?

    - by Jim Mischel
    I have a large (several megabytes) XML file that's updated rather frequently (every 10 minutes or less) and gets a lot of traffic. I'd like to implement some caching to reduce bandwidth and server load.

    Looking at the Apache documentation, I see a dizzying array of configuration options that involve various combinations of mod_expires, mod_headers, and mod_cache (and variants). I end up running in circles and the results aren't what I expect. I'm comfortable editing the various configuration files if I have some idea what I'm supposed to change, but at the moment I'm poking around in the dark, and that's never a comfortable feeling. So perhaps if I describe what I want, somebody here can take me by the hand and say, "This is what you need to do."

    Periodically this file, call it "stuff.xml", is updated and a new version is copied to the directory. The external URL would be, for example, http://example.com/stuff.xml. Understand, this part works: whenever I request the file, I get the expected result.

    But the file is big and I want to save bandwidth, so first I'd like to implement conditional GET semantics with the If-Modified-Since header. How do I do this? I've enabled mod_headers and mod_expires and added the <FilesMatch> section to my httpd.conf as recommended in countless examples I've seen online, but that didn't change the behavior when I made a conditional GET request: I always get a status 200 with the entire document. So how the heck do I implement this? That'll cut down on needless transfers.

    I'd also like to limit the amount of data transferred. Seeing as this is XML, gzipping it should save me 50% or more. My next step would be to somehow gzip the file and, if it's not too difficult, store it in memory. That'll cut down on per-access data transfer, and also reduce disk transfers. So how do I implement this type of caching? Thanks in advance.
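    To make the question concrete, the countless examples alluded to above usually boil down to directives along these lines (a sketch of the pattern being discussed, not a known-working answer for this case):

        <IfModule mod_expires.c>
            <FilesMatch "\.xml$">
                ExpiresActive On
                ExpiresDefault "access plus 10 minutes"
            </FilesMatch>
        </IfModule>

        <IfModule mod_deflate.c>
            # compress XML responses on the fly
            AddOutputFilterByType DEFLATE application/xml text/xml
        </IfModule>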

    Read the article

  • HP-UX cannot boot from Ignite tape

    - by Spirit
    We have an HP rp2470 server running HP-UX 11.00, with LVM mirroring. For redundancy we have a second rp2470 with the same hardware (same two processors, same RAM, same two HDDs, same number of LAN cards), and I want to clone the first server to the second. For that purpose I am making an Ignite tape with the following command:

        make_tape_recovery -x inc_entire=vg00

    The Ignite tape finishes without problems. When I boot the second server from this Ignite tape, the server starts to boot and the Ignite restore finishes without any errors, only a few notes, which are normal. However, vmunix does not boot: when the restore finishes, it drops to the ISL prompt, and from there I cannot boot /stand/vmunix. I tried to run the recovery shell, but with no success. When the recovery shell asks to do a frecover to restore critical files, I receive the error:

        frecover(5405): unable to open /dev/rmt/0m

    At first I thought that the problem might be the difference in the firmware versions of the servers: the fw version of the production server is 43.50 and the fw version of the backup server is 42.19. So I did a fw upgrade of my backup server so that both servers are at v43.50, and tried the recovery again, but still can't boot HP-UX.

    Next I made another archive tape with the -I (interactive) flag:

        make_tape_recovery -I -x inc_entire=vg00

    and tried the recovery with it; again no good. I cannot find any errors or warnings in the Ignite log, and I cannot boot HP-UX. I only get the ISL prompt. This is what I've noticed in the GSP logs:

        ************* SYSTEM ALERT **************
        SYSTEM NAME: mcnfwim1
        DATE: 07/27/2003 TIME: 10:18:49
        ALERT LEVEL: 6 = Boot possible, pending failure - action required
        REASON FOR ALERT
        SOURCE: 8 = I/O
        SOURCE DETAIL: 6 = disk SOURCE ID: 0
        PROBLEM DETAIL: 0 = no problem detail
        LEDs: RUN ATTENTION FAULT REMOTE POWER
              FLASH OFF ON ON ON
        LED State: Boot Failed. Running non-OS code. Check Chassis and Console Logs for error messages.
        0x00000060860010B0 00000000 00000000 - type 0 = Data Field Unused
        0x58000860860010B0 00006706 1B0A1231 - type 11 = Timestamp 07/27/2003 10:18:49

    And another GSP log:

        Log Entry # 3 :
        SYSTEM NAME: mcnfwim1
        DATE: 07/27/2003 TIME: 10:12:20
        ALERT LEVEL: 6 = Boot possible, pending failure - action required
        SOURCE: 8 = I/O
        SOURCE DETAIL: 6 = disk SOURCE ID: 0
        PROBLEM DETAIL: 0 = no problem detail
        CALLER ACTIVITY: 1 = test STATUS: 0
        CALLER SUBACTIVITY: 0B = implementation dependent
        REPORTING ENTITY TYPE: 0 = system firmware
        REPORTING ENTITY ID: 00
        0x00000060860010B0 00000000 00000000 type 0 = Data Field Unused
        0x58000860860010B0 00006706 1B0A0C14 type 11 = Timestamp 07/27/2003 10:12:20
        Type CR for next entry, - CR for previous entry, Q CR to quit.

    Please note that I cannot change anything on the production server; I can only make changes to the backup server. Any help is appreciated.

    Read the article

  • Silverlight Cream Monday WP7 App Review #1

    - by Dave Campbell
    I'm going to try something here... if it seems useful, I'll continue; if it doesn't, I'll stop... so give me feedback! There are *lots* of apps in the WP7 Marketplace, and heaven help me, but the Marketplace sucks for finding stuff. I won't rehash what's already been said in the blogs, but I agree with one and all. I went out last Saturday to find 2 apps that I knew were released, and couldn't do so on my device. Even in the Zune app it took quite a while to find them... ok, I'll back off a bit, because I just found out I can do 'Search' now if I know the name... I didn't think that was working before. So my thought is that on Mondays (like today) I will post a review of 5 apps/games I either use or have played with on my device. These are strictly my opinions, you understand, but hey... it's better than a poke in the eye with an iPhone! A few disclaimers:

    1. Feel free to write me about your app and tell me about it.
    2. While it would be very cool to receive a whole bunch of xap files to review, at this point, for technical reasons, I'm unable to side-load my device.
    3. Since I plan on only doing this one day a week, and only 5 at a time, I may never get caught up, so if you send me some info, be patient.
    4. Re: games... remember I'm old... I'm from the era of Colossal Cave and Zork. Duke Nukem 2D and Captain Comic were awesome. I don't own an XBOX or any other game system, so take game reviews from my perspective -- who knows, it may be refreshing :)
    5. I won't pay for an app or game just to try it. If you expect me to test-drive your app, it's going to have to have a free trial.

    In this issue:

    Jingo! is the first app I bought, just to see what the experience was like. It's very much like a game we used to play in school in the Army in 1971, on paper we passed around. Sort of a cross between Hangman and Mastermind: you try to figure out the hidden word in 5 tries. You get really good at 5-letter words after a while. I like this because you have to think, and you're not pressured by a clock. Jingo! is by James Furdell and is $1.99.

    I reviewed René Schulte's Pictures Lab a while back, and have not changed my mind. This is an excellent app for playing with any photo on your device... one you've just taken, one you've synced from your PC, or one you've saved from email. I like this because you can get some cool effects for your photos, and it just works. Pictures Lab is by Schulte Software Development and is $1.99.

    Since I work as a consultant, and from home, I wanted something I could track my time with. I've test-driven all the contenders I could find so far on the phone, and so far I like ONTRACK! the best. If asked, I have some suggestions, but it's probably just the way I work or think. What I do like is that I can tap a project to start/stop/restart a counter, and at the end of the day it shows me how much time I've been working. If there's a way to make an adjustment in case you forget to tap the counter, I don't know how to do it, and that's my biggest complaint. I like this because you can get a daily readout which you can also email as a spreadsheet. The daily results display is very good. ONTRACK! is by Qmino and is $2.99.

    Remember item 4 above... I've been playing guitar for 48 years... obviously since before the invention of 'tuners', so I'm not as dependent upon these as some folks are. I've tried some in the past and have always felt I can do just as well by ear (I have perfect relative pitch). So I gave this app by András Velvárt a dance just to see how it works, and it is surprisingly good. If you're used to one of the stage tuners this may take a little getting used to, but it does the job. The difference with this one is there is no real 'null' point inside which you can think your guitar is in tune. The sound wave stays visible on the device: if it's moving to the right, your string is flat; if it's moving to the left, your string is sharp. Getting it exact might be tricky, but it is exact! If you need to rely on a tuner, this is a good choice in my opinion, exactly because of the sensitivity... tune up with this and you're dead-on. Guitar Tuner is by Kinabalu Innovation Limited and is $0.99.

    Popper 2 is the WP7 version of a wildly popular game by Bill Reiss named Dr. Popper. You can get a trial, or you can now get a free lite version of the game. Popper 2 is a fast-paced bubble-breaker game. I find it something fun to play when I just want to buzz out, but maybe the best review is that my daughter didn't want to give my phone back when I showed it to her, and always wants to grab my phone to play 'that game'. A fun distraction with great graphics and a great price. Popper 2 is by Blue Rose Systems, LLC and is $1.29.

    Let me know what you think of the idea of doing reviews, or the layout/whatever, and Stay in the 'Light!

    Read the article

  • [XSL-FO] Characters from languages other than English

    - by Lukasz Kurylo
    My client has departments in Central and Eastern Europe, so there is a high probability that the generated PDFs will contain, at least in people's names and surnames, characters specific to those countries' languages.

    With XSL-FO we can use some out-of-the-box fonts; the default is Times. We can change it, for a specific block of text or for the entire document, to another font like Helvetica or Arial. All is well as long as we use only the English alphabet. But suppose we want to add some characters from, say, Polish, Russian, or Bulgarian in the *.fo file:

    <fo:block>
      <fo:inline font-weight="bold">english: </fo:inline>
      <fo:inline font-weight="bold">yellow</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">polish: </fo:inline>
      <fo:inline font-weight="bold">żółty</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">russian: </fo:inline>
      <fo:inline font-weight="bold">жёлтый</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">bulgarian: </fo:inline>
      <fo:inline font-weight="bold">жълт</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">english: </fo:inline>
      <fo:inline font-weight="bold">yellow</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">polish: </fo:inline>
      <fo:inline font-weight="bold" font-family="Arial">żółty</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">russian: </fo:inline>
      <fo:inline font-weight="bold" font-family="Arial">жёлтый</fo:inline>
    </fo:block>
    <fo:block>
      <fo:inline font-weight="bold">bulgarian: </fo:inline>
      <fo:inline font-weight="bold" font-family="Arial">жълт</fo:inline>
    </fo:block>

    The result can differ from what we expect depending on the selected font (the original post shows the rendered output as screenshots): neither Times nor Arial displays these characters correctly in this case. The problem here is not related to XSL-FO itself, but rather to the renderer we are using. I lost a lot of time finding a way to get these characters into the files generated by the XSL-FO -> PDF renderer I am using. Fortunately, all that has to be done is to embed the font (or part of it) in the file(s) during rendering.

    The renderer I am using is the open source FO.NET. With it, the code to generate a PDF file looks like this:

    var fonet = Fonet.FonetDriver.Make();
    fonet.Render("source.fo", "result.pdf");

    To embed the font in the PDF, we need to set the appropriate option on the driver:

    fonet.Options = new Fonet.Render.Pdf.PdfRendererOptions()
    {
        FontType = Fonet.Render.Pdf.FontType.Embed
    };

    Now the generated PDF should look right: the result for the Arial font is exactly as it should be, because, unlike the default Times, Arial includes characters for more than just the English language; we should avoid Times whenever we are not generating English-only documents. It is worth noticing that in this situation the generated PDF file is quite large, more than 400 KB in size. This is, of course, because the entire font is embedded in it to make the document portable to systems where the used font is not present. Instead of embedding the entire font, we can embed only the subset of characters actually used, by changing the options to:

    fonet.Options = new Fonet.Render.Pdf.PdfRendererOptions()
    {
        FontType = Fonet.Render.Pdf.FontType.Subset
    };

    Now this particular PDF is only 12 KB in size.
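    Put together, the snippets above amount to a small self-contained program. A minimal sketch, with the file names as placeholders:

    using Fonet;
    using Fonet.Render.Pdf;

    class FoToPdf
    {
        static void Main()
        {
            var driver = FonetDriver.Make();
            // embed only the glyphs actually used, keeping the output small
            driver.Options = new PdfRendererOptions
            {
                FontType = FontType.Subset
            };
            driver.Render("source.fo", "result.pdf");
        }
    }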

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What's the most safe and efficient way to get the job done? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question: SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them: I have old CDs/DVDs which have some backups; these backups have some work and personal files. I have always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous; pieces can fly off fast and cause harm. Scratching them badly is what I always do, but it takes a long time, and I have managed to read some of the data on the scratched CDs/DVDs. What's the way to physically destroy a CD/DVD safely? How should he approach the problem?

    The Answer: SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution: The proper way is to get yourself a shredder that also handles CDs – look online for CD shredders. This is the right option if you end up doing this routinely. I don't do this very often – for small-scale destruction I favour a pair of tin snips – they have enough force to cut through a CD, yet are blunt enough to cause small cracks along the shear line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing the plastic, and these work magnificently. Do it in a bag, 'cause this generates sparkly bits. There's also the fun, and probably dangerous, way – find yourself an old microwave and microwave them. I would suggest doing this in a well-ventilated area, of course, and not using your mother's good microwave. There are a lot of videos of this on YouTube – such as this one (done in a kitchen… using his mom's microwave). This results in a very much destroyed CD in every respect. If I were an evil hacker mastermind, this is what I'd do. The other options are better for the rest of us.

    Another contributor, Keltari, notes that the only safe (and DoD-approved) way to dispose of data is total destruction: The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase "good enough for government work" does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc. CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data reconstructed, minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While that would technically eliminate the data completely, it leaves no record of the disc having existed. And in some places, like the DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data while keeping the label surface of the disc intact. Basically, it's no different than using sandpaper on the writable side, till the data is gone.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • DHCP and Router load testing

    - by John H
    I manage a campground wifi network with an average of 10-60 active users. I have encountered issues where the router starts acting flaky (failing to assign DHCP leases or failing to pass traffic) without any clear warning (low CPU utilization, etc.). I upgraded the router a couple of times and ended up with a Netgear ProSafe VPN router that seems to be handling the traffic. The interesting thing is that the Netgear has lower specs than the Buffalo router it replaced, which suggests the issue is with the DD-WRT firmware rather than the hardware. While I'll be pursuing this issue on the dd-wrt forums, I need a way to test routers. My vision is having 1-2 computers connected on the LAN side and 1-2 computers connected on the WAN side. I want the LAN computers to generate various types of traffic and connections, as well as request DHCP addresses. A few notes:

    - The wireless aspect should be a non-issue. Most clients connect to a wireless bridge and come into the router through a network cable.
    - I had a monitoring server with Nagios running check_dhcp against the router. This server was connected directly by a network cable, eliminating wifi bridges and other devices from the equation.
    - This question is somewhat related, but not exactly: Load testing wireless LANs
    - I am going to look at IxChariot. While I'd ideally like to use one computer on each side, running Linux and preferably free software, I can entertain running Windows, multiple computers, or non-free software.
    - Total bandwidth doesn't seem to be the issue. I can transfer large files all day. Even on the busiest days, the users seemed to pull only ~5Mbps.
    - There is very little LAN-to-LAN traffic, and most of it might never have reached the main router.
    - The issue I need to test for seems to be tied to active users or, more appropriately, active sessions. I know "active users" or "active clients" is a meaningless term from a router standpoint and wouldn't mind having more appropriate terms to use.

    Summary: I need a way to test a router's ability to handle traffic from a large number of clients. My current strategy is to purchase a router, deploy it, and see how it fails in the live environment.
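    For what it's worth, a crude way to approximate a crowd of clients from a single LAN box is a shell loop. A sketch, assuming curl and the Nagios check_dhcp plugin are installed; the URL, plugin path, and server IP are placeholders, and note that this does not spoof distinct MAC addresses, so it only roughly approximates many independent clients:

        #!/bin/sh
        # open 50 concurrent HTTP sessions through the router
        for i in $(seq 1 50); do
            curl -s -o /dev/null http://wan-side-host/test.bin &
        done
        # fire off 20 parallel DHCP probes (check_dhcp needs root)
        for i in $(seq 1 20); do
            sudo /usr/lib/nagios/plugins/check_dhcp -s 192.168.1.1 &
        done
        wait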

    Read the article

  • "Error loading operating system": Win7/Vista

    - by LookitsPuck
    Hey fellas, I've had this computer for about 2 years now. It originally had Vista installed; now it has Windows 7 installed, each on a separate hard drive. I also have another drive used strictly for media. About a week ago, the Vista hard drive started on its way out and I was getting problems on startup. After changing a few BIOS settings, I was able to get into Windows 7 and everything was fine. However, I kept remembering the startup issues, so I deleted the Vista boot entry under msconfig. I didn't restart the computer at that time, though. For a few days, everything was OK. Last night I played a little poker, then hit the hay. I woke up to a good ole "Error loading operating system" on the screen. Just wonderful. It looks like the computer restarted overnight (auto updates, anyone?). So, after a bit of finagling and half-hearted tries, I can't get past the "Error loading operating system" screen. FWIW, the BIOS can see my hard drives fine. So I move on: I get my Windows 7 installation disc to try to do a repair. I go into the BIOS, change the boot priority to the DVD drive, and we're on our merry way. After loading from the disc, I first try jumping into the "Repair your computer" section, which opens up the System Recovery Options. However, this is where the problem comes into play: I don't see any operating systems listed here. Nada. What's odd, though, is that if I click on the Load Drivers button, I can see my Windows 7 partition (C:) and can go through the files and folders without issue. What do I do at this point? I can't repair it. I can traverse the hard drive without issue from an open dialog in the System Recovery Options, but I'm still getting the good ole "Error loading operating system" on bootup. Suggestions? Thanks all!!
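    For the record, when Startup Repair cannot see an installation, the usual fallback from the installation disc is the recovery Command Prompt and the stock bootrec tool. These are the standard Windows RE commands; whether they fit this particular disk layout is another matter:

        bootrec /fixmbr        (rewrite the master boot record)
        bootrec /fixboot       (write a new boot sector to the system partition)
        bootrec /rebuildbcd    (scan all disks for Windows installations and rebuild the BCD)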

    Read the article

  • IE and Google Chrome time out on an IIS6-hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too, but only in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same URL over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts. Firebug shows no errors, nor even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that no errors are being reported anywhere are starting to drive me insane. Does anyone have any insight on a problem like this?
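    One way to take the browsers out of the equation is to talk to the SSL port directly and watch where the exchange stalls. A hedged diagnostic sketch using the standard openssl client, with the poster's example host name:

        openssl s_client -state -connect www.mysite.com:443

    If the handshake completes promptly there (the -state flag prints each negotiation step), the stall is more likely in PHP or in how IIS buffers the response than in the SSL negotiation itself.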

    Read the article

  • How do I mount my external HDD when I get filesystem type errors?

    - by Snuggie
    I am a relatively new Ubuntu user and I am having some difficulty mounting my external 2TB HDD. When I first installed Linux the external HDD was working just fine; however, it has since stopped working, and I have a lot of important files on there that I need. Before, my HDD would mount automatically with no worries. Now it doesn't mount automatically, and when I try to mount it manually I keep running into filesystem type errors that I can't seem to get past. Below is my step-by-step process for trying to mount the HDD, along with the errors I am receiving. If anybody has any idea what I am doing wrong or how to correct the issue, I would greatly appreciate it.

    Step 1) Ensure the computer recognizes my external HDD.

    pj@PJ:~$ dmesg
    ...
    [ 5790.367910] scsi 7:0:0:0: Direct-Access WD My Passport 0748 1022 PQ: 0 ANSI: 6
    [ 5790.368278] scsi 7:0:0:1: Enclosure WD SES Device 1022 PQ: 0 ANSI: 6
    [ 5790.370122] sd 7:0:0:0: Attached scsi generic sg2 type 0
    [ 5790.370310] ses 7:0:0:1: Attached Enclosure device
    [ 5790.370462] ses 7:0:0:1: Attached scsi generic sg3 type 13
    [ 5792.971601] sd 7:0:0:0: [sdb] 3906963456 512-byte logical blocks: (2.00 TB/1.81 TiB)
    [ 5792.972148] sd 7:0:0:0: [sdb] Write Protect is off
    [ 5792.972162] sd 7:0:0:0: [sdb] Mode Sense: 47 00 10 08
    [ 5792.972591] sd 7:0:0:0: [sdb] No Caching mode page found
    [ 5792.972605] sd 7:0:0:0: [sdb] Assuming drive cache: write through
    [ 5792.975235] sd 7:0:0:0: [sdb] No Caching mode page found
    [ 5792.975249] sd 7:0:0:0: [sdb] Assuming drive cache: write through
    [ 5792.987504] sdb: sdb1
    [ 5792.988900] sd 7:0:0:0: [sdb] No Caching mode page found
    [ 5792.988911] sd 7:0:0:0: [sdb] Assuming drive cache: write through
    [ 5792.988920] sd 7:0:0:0: [sdb] Attached SCSI disk

    Step 2) Check whether it mounted properly (it did not).

    pj@PJ:~$ df -ah
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda1             682G  3.9G  644G   1% /
    proc                     0     0     0    - /proc
    sysfs                    0     0     0    - /sys
    none                     0     0     0    - /sys/fs/fuse/connections
    none                     0     0     0    - /sys/kernel/debug
    none                     0     0     0    - /sys/kernel/security
    udev                  2.9G  4.0K  2.9G   1% /dev
    devpts                   0     0     0    - /dev/pts
    tmpfs                 1.2G  928K  1.2G   1% /run
    none                  5.0M     0  5.0M   0% /run/lock
    none                  2.9G  156K  2.9G   1% /run/shm
    gvfs-fuse-daemon         0     0     0    - /home/pj/.gvfs

    Step 3) Try mounting manually using NTFS and VFAT (both as sdb and sdb1).

    pj@PJ:~$ sudo mount /dev/sdb /media/Passport/
    NTFS signature is missing.
    Failed to mount '/dev/sdb': Invalid argument
    The device '/dev/sdb' doesn't seem to have a valid NTFS.
    Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    pj@PJ:~$ sudo mount /dev/sdb1 /media/Passport/
    NTFS signature is missing.
    Failed to mount '/dev/sdb1': Invalid argument
    The device '/dev/sdb1' doesn't seem to have a valid NTFS.
    Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    pj@PJ:~$ sudo mount -t ntfs /dev/sdb /media/Passport/
    NTFS signature is missing.
    Failed to mount '/dev/sdb': Invalid argument
    The device '/dev/sdb' doesn't seem to have a valid NTFS.
    Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    pj@PJ:~$ sudo mount -t vfat /dev/sdb /media/Passport/
    mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so

    pj@PJ:~$ sudo mount -t ntfs /dev/sdb1 /media/Passport/
    NTFS signature is missing.
    Failed to mount '/dev/sdb1': Invalid argument
    The device '/dev/sdb1' doesn't seem to have a valid NTFS.
    Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    pj@PJ:~$ sudo mount -t vfat /dev/sdb1 /media/Passport/
    mount: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so
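    Before trying more mount invocations, it is worth asking the kernel what it actually sees on the disk. A short sketch with standard tools; the device names are taken from the dmesg output above, and ntfsfix should only be run if blkid really reports NTFS:

        sudo fdisk -l /dev/sdb     # show the partition table and partition type
        sudo blkid /dev/sdb1       # report any detected filesystem signature
        sudo ntfsfix /dev/sdb1     # basic NTFS repair, only if blkid says NTFS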

    Read the article
