Search Results

Search found 81445 results on 3258 pages for 'file command'.


  • javascript|jquery|ajax|etc. How to download/read only the first 80KB of a file.

    - by DeusAphor
    I am making a Greasemonkey plugin for a website that has many flash files. I'd like to hash each flash file, but the problem is that there can be up to 10 files (total) at about 10 MB each, so hashing them in full is slow. I'd like to grab only the first 80KB of each file to hash. The end result would be an easy way to blacklist certain flash files containing unwanted content. Is this possible? Suggestions? Code examples are greatly appreciated!
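
    A server that honors HTTP Range requests makes this straightforward. Below is a minimal sketch (not from the original question) that fetches only the first 80 KB and hashes it with SHA-256; it assumes a modern browser with fetch and Web Crypto available, which is a stronger environment than the 2010-era Greasemonkey sandbox the asker describes, and the URL in the usage note is hypothetical.

      // Fetch only the first 80 KB of a file via an HTTP Range request and
      // hash it with SHA-256. Assumes the server honors Range headers.
      async function hashFirst80KB(url) {
        const limit = 80 * 1024; // 80 KB
        const response = await fetch(url, {
          headers: { Range: "bytes=0-" + (limit - 1) }
        });
        // 206 Partial Content means the range was honored; a plain 200 means
        // the whole file came back, so truncate it ourselves.
        let buffer = await response.arrayBuffer();
        if (buffer.byteLength > limit) {
          buffer = buffer.slice(0, limit);
        }
        const digest = await crypto.subtle.digest("SHA-256", buffer);
        // Hex-encode the digest so it can be compared against a blacklist.
        return Array.from(new Uint8Array(digest))
          .map(function (b) { return b.toString(16).padStart(2, "0"); })
          .join("");
      }

      // Usage (hypothetical URL):
      // hashFirst80KB("https://example.com/movie.swf").then(console.log);

    If the server ignores the Range header, the request silently degrades to downloading the whole file, so checking for a 206 status before assuming any bandwidth was saved is worthwhile.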

    Read the article

  • How to get values of attributes in an XML file using C++?

    - by Reversed
    I need to write some C++ code that reads an XML string so that a call like getValueOfElement("ACTION_ON_CARD") returns 3 and getValueOfElement("ACTION_ON_ENVELOPE") returns YES. XML string: <ACTION_ON_CARD>3</ACTION_ON_CARD> <ACTION_ON_ENVELOPE>YES</ACTION_ON_ENVELOPE> Any code example would be helpful. Thanks
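
    In C++ this is usually done with a small XML library such as TinyXML-2, pugixml or RapidXML rather than by hand. To keep a single example language on this page, here is a hedged JavaScript illustration of the lookup the asker describes, using the browser's DOMParser; the function name getValueOfElement and the wrapping <root> element are conveniences of the sketch, not anything from the question.

      // Illustration only: parse an XML fragment and return the text content
      // of the first element with the given tag name.
      function getValueOfElement(xmlString, tagName) {
        const doc = new DOMParser().parseFromString(
          "<root>" + xmlString + "</root>", // wrap the fragment so it has one root
          "application/xml"
        );
        const node = doc.getElementsByTagName(tagName)[0];
        return node ? node.textContent : null;
      }

      const xml =
        "<ACTION_ON_CARD>3</ACTION_ON_CARD>" +
        "<ACTION_ON_ENVELOPE>YES</ACTION_ON_ENVELOPE>";

      console.log(getValueOfElement(xml, "ACTION_ON_CARD"));     // "3"
      console.log(getValueOfElement(xml, "ACTION_ON_ENVELOPE")); // "YES"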

    Read the article

  • Too many connections to 212.192.255.240

    - by Castor
    Recently, my Internet slowed down drastically. I downloaded a tool to see the TCP/IP connections from my Vista computer. I found out that a lot TCP/IP connections are being connected to 212.192.255.240 through SVCHost. It seems that it is trying to connect to different ports. I think that my computer is being infected with some kind of malware etc. But I am not sure how to get rid of it. I did a little bit of research on this IP but found nothing. Any suggestions are highly appreciated. UPDATE: This is the HiJackThis log file and I can't find any thing weird. Also, the program is also trying to create connections to 91.205.127.63, which is also from Russia. Logfile of Trend Micro HijackThis v2.0.4 Scan saved at 18:20:54 PM, on 4/29/2010 Platform: Windows Vista SP2 (WinNT 6.00.1906) MSIE: Internet Explorer v8.00 (8.00.6001.18882) Boot mode: Normal Running processes: C:\Windows\SYSTEM32\taskeng.exe C:\Windows\system32\Dwm.exe C:\Windows\SYSTEM32\Taskmgr.exe C:\Windows\explorer.exe C:\Windows\System32\igfxpers.exe C:\Program Files\Alwil Software\Avast4\ashDisp.exe C:\Program Files\Software602\Print2PDF\Print2PDF.exe C:\Windows\system32\igfxsrvc.exe C:\Program Files\VertrigoServ\Vertrigo.exe C:\Program Files\Java\jre6\bin\jusched.exe C:\Program Files\Google\GoogleToolbarNotifier\GoogleToolbarNotifier.exe C:\Windows\system32\wbem\unsecapp.exe C:\Program Files\Windows Media Player\wmpnscfg.exe C:\Program Files\X-NetStat Professional\xns5.exe C:\Program Files\Mozilla Firefox\firefox.exe C:\Program Files\HP\Digital Imaging\smart web printing\hpswp_clipbook.exe C:\Windows\system32\cmd.exe C:\Program Files\Trend Micro\HiJackThis\HiJackThis.exe R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = about:blank R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,ProxyServer = 10.0.0.30:8118 R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = R3 - URLSearchHook: Yahoo! Toolbar - {EF99BD32-C1FB-11D2-892F-0090271D4F88} - C:\Program Files\Yahoo!\Companion\Installs\cpn0\yt.dll F2 - REG:system.ini: Shell=explorer.exe rundll32.exe O2 - BHO: &Yahoo! 
Toolbar Helper - {02478D38-C3F9-4efb-9B51-7695ECA05670} - C:\Program Files\Yahoo!\Companion\Installs\cpn0\yt.dll O2 - BHO: HP Print Enhancer - {0347C33E-8762-4905-BF09-768834316C61} - C:\Program Files\HP\Digital Imaging\Smart Web Printing\hpswp_printenhancer.dll O2 - BHO: AcroIEHelperStub - {18DF081C-E8AD-4283-A596-FA578C2EBDC3} - C:\Program Files\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll O2 - BHO: RoboForm BHO - {724d43a9-0d85-11d4-9908-00400523e39a} - C:\Program Files\Siber Systems\AI RoboForm\roboform.dll O2 - BHO: Groove GFS Browser Helper - {72853161-30C5-4D22-B7F9-0BBC1D38A37E} - C:\PROGRA~1\MICROS~3\Office12\GRA8E1~1.DLL O2 - BHO: Google Toolbar Helper - {AA58ED58-01DD-4d91-8333-CF10577473F7} - C:\Program Files\Google\Google Toolbar\GoogleToolbar_32.dll O2 - BHO: Google Toolbar Notifier BHO - {AF69DE43-7D58-4638-B6FA-CE66B5AD205D} - C:\Program Files\Google\GoogleToolbarNotifier\5.5.4723.1820\swg.dll O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files\Java\jre6\bin\jp2ssv.dll O2 - BHO: Google Gears Helper - {E0FEFE40-FBF9-42AE-BA58-794CA7E3FB53} - C:\Program Files\Google\Google Gears\Internet Explorer\0.5.36.0\gears.dll O2 - BHO: SingleInstance Class - {FDAD4DA1-61A2-4FD8-9C17-86F7AC245081} - C:\Program Files\Yahoo!\Companion\Installs\cpn0\YTSingleInstance.dll O2 - BHO: HP Smart BHO Class - {FFFFFFFF-CF4E-4F2B-BDC2-0E72E116A856} - C:\Program Files\HP\Digital Imaging\Smart Web Printing\hpswp_BHO.dll O3 - Toolbar: Yahoo! Toolbar - {EF99BD32-C1FB-11D2-892F-0090271D4F88} - C:\Program Files\Yahoo!\Companion\Installs\cpn0\yt.dll O3 - Toolbar: &RoboForm - {724d43a0-0d85-11d4-9908-00400523e39a} - C:\Program Files\Siber Systems\AI RoboForm\roboform.dll O3 - Toolbar: Google Toolbar - {2318C2B1-4965-11d4-9B18-009027A5CD4F} - C:\Program Files\Google\Google Toolbar\GoogleToolbar_32.dll O4 - HKLM\..\Run: [RtHDVCpl] C:\Program Files\Realtek\Audio\HDA\RtHDVCpl.exe -s O4 - HKLM\..\Run: [IgfxTray] C:\Windows\system32\igfxtray.exe O4 - HKLM\..\Run: [Persistence] C:\Windows\system32\igfxpers.exe O4 - HKLM\..\Run: [avast!] 
C:\PROGRA~1\ALWILS~1\Avast4\ashDisp.exe O4 - HKLM\..\Run: [Print2PDF Print Monitor] "C:\Program Files\Software602\Print2PDF\Print2PDF.exe" /server O4 - HKLM\..\Run: [VertrigoServ] "C:\Program Files\VertrigoServ\Vertrigo.exe" O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files\Java\jre6\bin\jusched.exe" O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files\Adobe\Reader 9.0\Reader\Reader_sl.exe" O4 - HKLM\..\Run: [Adobe ARM] "C:\Program Files\Common Files\Adobe\ARM\1.0\AdobeARM.exe" O4 - HKLM\..\Run: [Google Quick Search Box] "C:\Program Files\Google\Quick Search Box\GoogleQuickSearchBox.exe" /autorun O4 - HKLM\..\Run: [iTunesHelper] "C:\Program Files\iTunes\iTunesHelper.exe" O4 - HKLM\..\Run: [QuickTime Task] "C:\Program Files\QuickTime\QTTask.exe" -atboottime O4 - HKCU\..\Run: [swg] "C:\Program Files\Google\GoogleToolbarNotifier\GoogleToolbarNotifier.exe" O4 - HKCU\..\Run: [CCProxy] C:\CCProxy\CCProxy.exe O4 - HKCU\..\Run: [AlcoholAutomount] "C:\Program Files\Alcohol Soft\Alcohol 120\axcmd.exe" /automount O4 - HKCU\..\Run: [RoboForm] "C:\Program Files\Siber Systems\AI RoboForm\RoboTaskBarIcon.exe" O4 - HKCU\..\Run: [FileHippo.com] "C:\Program Files\filehippo.com\UpdateChecker.exe" /background O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /detectMem (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-19\..\Run: [WindowsWelcomeCenter] rundll32.exe oobefldr.dll,ShowWelcomeCenter (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /detectMem (User 'NETWORK SERVICE') O4 - Startup: AutorunsDisabled O4 - Startup: Locate32 Autorun.lnk = C:\Program Files\Locate\Locate32.exe O4 - Startup: OneNote Table Of Contents.onetoc2 O8 - Extra context menu item: Add to Google Photos Screensa&ver - res://C:\Windows\system32\GPhotos.scr/200 O8 - Extra context menu item: Customize Menu - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComCustomizeIEMenu.html O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~3\Office14\EXCEL.EXE/3000 O8 - Extra context menu item: Fill Forms - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComFillForms.html O8 - Extra context menu item: Google Sidewiki... 
- res://C:\Program Files\Google\Google Toolbar\Component\GoogleToolbarDynamic_mui_en_96D6FF0C6D236BF8.dll/cmsidewiki.html O8 - Extra context menu item: RoboForm Toolbar - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComShowToolbar.html O8 - Extra context menu item: S&end to OneNote - res://C:\PROGRA~1\MICROS~3\Office14\ONBttnIE.dll/105 O8 - Extra context menu item: Save Forms - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComSavePass.html O9 - Extra button: (no name) - {09C04DA7-5B76-4EBC-BBEE-B25EAC5965F5} - C:\Program Files\Google\Google Gears\Internet Explorer\0.5.36.0\gears.dll O9 - Extra 'Tools' menuitem: &Gears Settings - {09C04DA7-5B76-4EBC-BBEE-B25EAC5965F5} - C:\Program Files\Google\Google Gears\Internet Explorer\0.5.36.0\gears.dll O9 - Extra button: Send to OneNote - {2670000A-7350-4f3c-8081-5663EE0C6C49} - C:\PROGRA~1\MICROS~3\Office12\ONBttnIE.dll O9 - Extra 'Tools' menuitem: S&end to OneNote - {2670000A-7350-4f3c-8081-5663EE0C6C49} - C:\PROGRA~1\MICROS~3\Office12\ONBttnIE.dll O9 - Extra button: Fill Forms - {320AF880-6646-11D3-ABEE-C5DBF3571F46} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComFillForms.html O9 - Extra 'Tools' menuitem: Fill Forms - {320AF880-6646-11D3-ABEE-C5DBF3571F46} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComFillForms.html O9 - Extra button: Save - {320AF880-6646-11D3-ABEE-C5DBF3571F49} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComSavePass.html O9 - Extra 'Tools' menuitem: Save Forms - {320AF880-6646-11D3-ABEE-C5DBF3571F49} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComSavePass.html O9 - Extra button: Print2PDF - {5B7027AD-AA6D-40df-8F56-9560F277D2A5} - C:\Program Files\Software602\Print2PDF\Print602.dll O9 - Extra 'Tools' menuitem: Print2PDF - {5B7027AD-AA6D-40df-8F56-9560F277D2A5} - C:\Program Files\Software602\Print2PDF\Print602.dll O9 - Extra button: RoboForm - {724d43aa-0d85-11d4-9908-00400523e39a} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComShowToolbar.html O9 - Extra 'Tools' menuitem: RoboForm Toolbar - {724d43aa-0d85-11d4-9908-00400523e39a} - file://C:\Program Files\Siber Systems\AI RoboForm\RoboFormComShowToolbar.html O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~3\Office12\REFIEBAR.DLL O9 - Extra button: Show or hide HP Smart Web Printing - {DDE87865-83C5-48c4-8357-2F5B1AA84522} - C:\Program Files\HP\Digital Imaging\Smart Web Printing\hpswp_BHO.dll O17 - HKLM\System\CCS\Services\Tcpip\..\{A80AB385-7767-4B5C-AF97-DBD65B29D8D1}: NameServer = 218.248.255.146 218.248.255.212 O17 - HKLM\System\CCS\Services\Tcpip\..\{D10402C1-9CDE-4582-A6B7-6C0D33B0E7BC}: NameServer = 218.248.255.146,218.248.255.212 O18 - Protocol: grooveLocalGWS - {88FED34C-F0CA-4636-A375-3CB6248B04CD} - C:\PROGRA~1\MICROS~3\Office12\GR99D3~1.DLL O22 - SharedTaskScheduler: Component Categories cache daemon - {8C7461EF-2B13-11d2-BE35-3078302C2030} - C:\Windows\system32\browseui.dll O23 - Service: Apple Mobile Device - Apple Inc. - C:\Program Files\Common Files\Apple\Mobile Device Support\bin\AppleMobileDeviceService.exe O23 - Service: avast! iAVS4 Control Service (aswUpdSv) - ALWIL Software - C:\Program Files\Alwil Software\Avast4\aswUpdSv.exe O23 - Service: avast! Antivirus - ALWIL Software - C:\Program Files\Alwil Software\Avast4\ashServ.exe O23 - Service: avast! Mail Scanner - ALWIL Software - C:\Program Files\Alwil Software\Avast4\ashMaiSv.exe O23 - Service: avast! 
Web Scanner - ALWIL Software - C:\Program Files\Alwil Software\Avast4\ashWebSv.exe O23 - Service: Bonjour Service - Apple Inc. - C:\Program Files\Bonjour\mDNSResponder.exe O23 - Service: CCProxy - Youngzsoft - C:\CCProxy\CCProxy.exe O23 - Service: Google Update Service (gupdate1c9c328490dac0) (gupdate1c9c328490dac0) - Google Inc. - C:\Program Files\Google\Update\GoogleUpdate.exe O23 - Service: Google Software Updater (gusvc) - Google - C:\Program Files\Google\Common\Google Updater\GoogleUpdaterService.exe O23 - Service: Distributed Transaction Coordinator MSDTCwercplsupport (MSDTCwercplsupport) - Unknown owner - C:\Windows\system32\acluiz.exe O23 - Service: Realtek Audio Service (RtkAudioService) - Realtek Semiconductor - C:\Windows\RtkAudioService.exe O23 - Service: StarWind AE Service (StarWindServiceAE) - Rocket Division Software - C:\Program Files\Alcohol Soft\Alcohol 120\StarWind\StarWindServiceAE.exe O23 - Service: SuperProServer - Unknown owner - C:\Windows\spnsrvnt.exe (file missing) O23 - Service: Vertrigo_Apache - Apache Software Foundation - C:\Program Files\VertrigoServ\apache\bin\v_apache.exe O23 - Service: Vertrigo_MySQL - Unknown owner - C:\Program Files\VertrigoServ\mysql\bin\v_mysqld.exe -- End of file - 10965 bytes

    Read the article

  • 'pip install carbon' looks like it works, but pip disagrees afterward

    - by fennec
    I'm trying to use pip to install the package carbon, a package related to statistics collection. When I run pip install carbon, it looks like everything works. However, pip is unconvinced that the package is actually installed. (This ultimately causes trouble because I'm using Puppet, and have a rule to install carbon using pip, and when puppet asks pip "is this package installed?" it says "no" and it reinstalls it again.) How do I figure out what's preventing pip from recognizing the success of this installation? Here is the output of the regular install: root@statsd:/opt/graphite# pip install carbon Downloading/unpacking carbon Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 Successfully installed carbon Cleaning up... 
root@statsd:/opt/graphite# pip freeze | grep carbon root@statsd: Here is the verbose version of the install: root@statsd:/opt/graphite# pip install carbon -v Downloading/unpacking carbon Using version 0.9.9 (newest of versions: 0.9.9, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5) Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon running egg_info creating pip-egg-info/carbon.egg-info writing requirements to pip-egg-info/carbon.egg-info/requires.txt writing pip-egg-info/carbon.egg-info/PKG-INFO writing top-level names to pip-egg-info/carbon.egg-info/top_level.txt writing dependency_links to pip-egg-info/carbon.egg-info/dependency_links.txt writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) reading manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon running install running build running build_py creating build creating build/lib.linux-i686-2.7 creating build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_publisher.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/manhole.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/instrumentation.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/cache.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/management.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/relayrules.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/events.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/protocols.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/conf.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/rewrite.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/hashing.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/writer.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/client.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/util.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/service.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_listener.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/routers.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/storage.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/log.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/__init__.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/state.py -> build/lib.linux-i686-2.7/carbon creating build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/receiver.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/rules.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/buffers.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/__init__.py -> build/lib.linux-i686-2.7/carbon/aggregator package init file 
'lib/twisted/plugins/__init__.py' not found (or not a regular file) creating build/lib.linux-i686-2.7/twisted creating build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_relay_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_aggregator_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_cache_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/carbon/amqp0-8.xml -> build/lib.linux-i686-2.7/carbon running build_scripts creating build/scripts-2.7 copying and adjusting bin/validate-storage-schemas.py -> build/scripts-2.7 copying and adjusting bin/carbon-aggregator.py -> build/scripts-2.7 copying and adjusting bin/carbon-cache.py -> build/scripts-2.7 copying and adjusting bin/carbon-relay.py -> build/scripts-2.7 copying and adjusting bin/carbon-client.py -> build/scripts-2.7 changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 running install_lib copying build/lib.linux-i686-2.7/carbon/amqp_publisher.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/manhole.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp0-8.xml -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/instrumentation.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/cache.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/management.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/relayrules.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/events.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/protocols.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/conf.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/rewrite.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/hashing.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/writer.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/client.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/util.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/aggregator/receiver.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/rules.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/buffers.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/__init__.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/service.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp_listener.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/routers.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/storage.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/log.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/__init__.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/state.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/twisted/plugins/carbon_relay_plugin.py -> /opt/graphite/lib/twisted/plugins copying 
build/lib.linux-i686-2.7/twisted/plugins/carbon_aggregator_plugin.py -> /opt/graphite/lib/twisted/plugins copying build/lib.linux-i686-2.7/twisted/plugins/carbon_cache_plugin.py -> /opt/graphite/lib/twisted/plugins byte-compiling /opt/graphite/lib/carbon/amqp_publisher.py to amqp_publisher.pyc byte-compiling /opt/graphite/lib/carbon/manhole.py to manhole.pyc byte-compiling /opt/graphite/lib/carbon/instrumentation.py to instrumentation.pyc byte-compiling /opt/graphite/lib/carbon/cache.py to cache.pyc byte-compiling /opt/graphite/lib/carbon/management.py to management.pyc byte-compiling /opt/graphite/lib/carbon/relayrules.py to relayrules.pyc byte-compiling /opt/graphite/lib/carbon/events.py to events.pyc byte-compiling /opt/graphite/lib/carbon/protocols.py to protocols.pyc byte-compiling /opt/graphite/lib/carbon/conf.py to conf.pyc byte-compiling /opt/graphite/lib/carbon/rewrite.py to rewrite.pyc byte-compiling /opt/graphite/lib/carbon/hashing.py to hashing.pyc byte-compiling /opt/graphite/lib/carbon/writer.py to writer.pyc byte-compiling /opt/graphite/lib/carbon/client.py to client.pyc byte-compiling /opt/graphite/lib/carbon/util.py to util.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/receiver.py to receiver.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/rules.py to rules.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/buffers.py to buffers.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/service.py to service.pyc byte-compiling /opt/graphite/lib/carbon/amqp_listener.py to amqp_listener.pyc byte-compiling /opt/graphite/lib/carbon/routers.py to routers.pyc byte-compiling /opt/graphite/lib/carbon/storage.py to storage.pyc byte-compiling /opt/graphite/lib/carbon/log.py to log.pyc byte-compiling /opt/graphite/lib/carbon/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/state.py to state.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_relay_plugin.py to carbon_relay_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_aggregator_plugin.py to carbon_aggregator_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py to carbon_cache_plugin.pyc running install_data copying conf/storage-schemas.conf.example -> /opt/graphite/conf copying conf/rewrite-rules.conf.example -> /opt/graphite/conf copying conf/relay-rules.conf.example -> /opt/graphite/conf copying conf/carbon.amqp.conf.example -> /opt/graphite/conf copying conf/aggregation-rules.conf.example -> /opt/graphite/conf copying conf/carbon.conf.example -> /opt/graphite/conf running install_egg_info running egg_info creating lib/carbon.egg-info writing requirements to lib/carbon.egg-info/requires.txt writing lib/carbon.egg-info/PKG-INFO writing top-level names to lib/carbon.egg-info/top_level.txt writing dependency_links to lib/carbon.egg-info/dependency_links.txt writing manifest file 'lib/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found reading manifest file 'lib/carbon.egg-info/SOURCES.txt' writing manifest file 'lib/carbon.egg-info/SOURCES.txt' removing '/opt/graphite/lib/carbon-0.9.9-py2.7.egg-info' (and everything under it) Copying lib/carbon.egg-info to /opt/graphite/lib/carbon-0.9.9-py2.7.egg-info running install_scripts copying build/scripts-2.7/validate-storage-schemas.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-aggregator.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-cache.py -> /opt/graphite/bin copying 
build/scripts-2.7/carbon-relay.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-client.py -> /opt/graphite/bin changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 writing list of installed files to '/tmp/pip-9LuJTF-record/install-record.txt' Successfully installed carbon Cleaning up... Removing temporary dir /opt/graphite/build... root@statsd:/opt/graphite# For reference, this is pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)

    Read the article

  • ls -l freezes terminal locally and remotely

    - by Jakobud
    I've been reading other SF threads regarding ls not returning results or freezing and stalling terminal sessions and it appears they usually the fault of network problems. My problem however, occurs both over remote SSH sessions but also if I am physically at the server itself... I just installed CentOS 5.4 on one of our servers. I'm setting up some rdiff-backup scripts and when I downloaded librsync and untared it, thats when I started seeing some weird behavior with ls -l. wget http://sourceforge.net/projects/librsync/files/librsync/0.9.7/librsync-0.9.7.tar.gz/download /tmp cd /tmp tar -xzf librsync-0.9.7.tar.gz Simple enough. To view the files in this directory I did this: ls results: librsync-0.9.7 librsync-0.9.7.tar.gz Now, if I ls -l, my terminal freezes. I have to re-ssh in to keep going. After reading SF threads, I thought it was network related. So I was extremely surprised to go sit down at the server itself and see the exact same thing happen... So its obviously not a network issues. Even if I ls /tmp/librsync-0.9.7, my terminal freezes just the same... Next I did an strace and got this (warning: wall of text coming....): strace ls -l /tmp execve("/bin/ls", ["ls", "-l", "/tmp"], [/* 21 vars */]) = 0 brk(0) = 0x1c521000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cc0000 uname({sys="Linux", node="massive.answeron.com", ...}) = 0 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=71746, ...}) = 0 mmap(NULL, 71746, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2b8582cc1000 close(3) = 0 open("/lib64/librt.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 \"\200\2730\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=53448, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd3000 mmap(0x30bb800000, 2132936, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bb800000 mprotect(0x30bb807000, 2097152, PROT_NONE) = 0 mmap(0x30bba07000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x30bba07000 close(3) = 0 open("/lib64/libacl.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\31@\2740\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=28008, ...}) = 0 mmap(0x30bc400000, 2120992, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bc400000 mprotect(0x30bc406000, 2093056, PROT_NONE) = 0 mmap(0x30bc605000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x30bc605000 close(3) = 0 open("/lib64/libselinux.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`E\300\2730\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=95464, ...}) = 0 mmap(0x30bbc00000, 2192784, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bbc00000 mprotect(0x30bbc15000, 2097152, PROT_NONE) = 0 mmap(0x30bbe15000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x15000) = 0x30bbe15000 mmap(0x30bbe17000, 1424, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30bbe17000 close(3) = 0 open("/lib64/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220\332\201\2720\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1717800, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd4000 mmap(0x30ba800000, 3498328, PROT_READ|PROT_EXEC, 
MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30ba800000 mprotect(0x30ba94d000, 2097152, PROT_NONE) = 0 mmap(0x30bab4d000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x14d000) = 0x30bab4d000 mmap(0x30bab52000, 16728, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30bab52000 close(3) = 0 open("/lib64/libpthread.so.0", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\220W\0\2730\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=145824, ...}) = 0 mmap(0x30bb000000, 2204528, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bb000000 mprotect(0x30bb016000, 2093056, PROT_NONE) = 0 mmap(0x30bb215000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x15000) = 0x30bb215000 mmap(0x30bb217000, 13168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30bb217000 close(3) = 0 open("/lib64/libattr.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\17\300\2750\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=17888, ...}) = 0 mmap(0x30bdc00000, 2110728, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bdc00000 mprotect(0x30bdc04000, 2093056, PROT_NONE) = 0 mmap(0x30bde03000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x30bde03000 close(3) = 0 open("/lib64/libdl.so.2", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20\16\300\2720\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=23360, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd5000 mmap(0x30bac00000, 2109696, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bac00000 mprotect(0x30bac02000, 2097152, PROT_NONE) = 0 mmap(0x30bae02000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x30bae02000 close(3) = 0 open("/lib64/libsepol.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0=\0\2740\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=247496, ...}) = 0 mmap(0x30bc000000, 2383136, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x30bc000000 mprotect(0x30bc03b000, 2097152, PROT_NONE) = 0 mmap(0x30bc23b000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3b000) = 0x30bc23b000 mmap(0x30bc23c000, 40224, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30bc23c000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd6000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cd7000 arch_prctl(ARCH_SET_FS, 0x2b8582cd6c50) = 0 mprotect(0x30bba07000, 4096, PROT_READ) = 0 mprotect(0x30bab4d000, 16384, PROT_READ) = 0 mprotect(0x30bb215000, 4096, PROT_READ) = 0 mprotect(0x30ba61b000, 4096, PROT_READ) = 0 mprotect(0x30bae02000, 4096, PROT_READ) = 0 munmap(0x2b8582cc1000, 71746) = 0 set_tid_address(0x2b8582cd6ce0) = 24102 set_robust_list(0x2b8582cd6cf0, 0x18) = 0 futex(0x7fff72d02d6c, FUTEX_WAKE_PRIVATE, 1) = 0 rt_sigaction(SIGRTMIN, {0x30bb005370, [], SA_RESTORER|SA_SIGINFO, 0x30bb00e7c0}, NULL, 8) = 0 rt_sigaction(SIGRT_1, {0x30bb0052b0, [], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x30bb00e7c0}, NULL, 8) = 0 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0 getrlimit(RLIMIT_STACK, {rlim_cur=10240*1024, rlim_max=RLIM_INFINITY}) = 0 access("/etc/selinux/", F_OK) = 0 brk(0) = 0x1c521000 brk(0x1c542000) = 0x1c542000 open("/etc/selinux/config", O_RDONLY) = 3 fstat(3, 
{st_mode=S_IFREG|0644, st_size=448, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cc1000 read(3, "# This file controls the state o"..., 4096) = 448 read(3, "", 4096) = 0 close(3) = 0 munmap(0x2b8582cc1000, 4096) = 0 open("/proc/mounts", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b8582cc1000 read(3, "rootfs / rootfs rw 0 0\n/dev/root"..., 4096) = 577 close(3) = 0 munmap(0x2b8582cc1000, 4096) = 0 open("/selinux/mls", O_RDONLY) = 3 read(3, "1", 19) = 1 close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 connect(3, {sa_family=AF_FILE, path="/var/run/setrans/.setrans-unix"...}, 110) = 0 sendmsg(3, {msg_name(0)=NULL, msg_iov(5)=[{"\1\0\0\0", 4}, {"\1\0\0\0", 4}, {"\1\0\0\0", 4}, {"\0", 1}, {"\0", 1}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 14 readv(3, [{"\1\0\0\0", 4}, {"\1\0\0\0", 4}, {"\0\0\0\0", 4}], 3) = 12 readv(3, [{"\0", 1}], 1) = 1 close(3) = 0 open("/usr/lib/locale/locale-archive", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=56413824, ...}) = 0 mmap(NULL, 56413824, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2b8582cd8000 close(3) = 0 ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(1, TIOCGWINSZ, {ws_row=65, ws_col=137, ws_xpixel=0, ws_ypixel=0}) = 0 open("/usr/share/locale/locale.alias", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=2528, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(3, "# Locale name alias data base.\n#"..., 4096) = 2528 read(3, "", 4096) = 0 close(3) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/usr/share/locale/en_US.UTF-8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en_US.utf8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en_US/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en.UTF-8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en.utf8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) lstat("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=4096, ...}) = 0 getxattr("/tmp", "system.posix_acl_access", 0x0, 0) = -1 ENODATA (No data available) getxattr("/tmp", "system.posix_acl_default", 0x0, 0) = -1 ENODATA (No data available) socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/nsswitch.conf", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=1711, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(3, "#\n# /etc/nsswitch.conf\n#\n# An ex"..., 4096) = 1711 read(3, "", 4096) = 0 close(3) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=71746, ...}) = 0 mmap(NULL, 71746, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2b85862a5000 close(3) = 0 open("/lib64/libnss_files.so.2", O_RDONLY) = 3 read(3, 
"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\37\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=53880, ...}) = 0 mmap(NULL, 2139432, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x2b85862b7000 mprotect(0x2b85862c1000, 2093056, PROT_NONE) = 0 mmap(0x2b85864c0000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x9000) = 0x2b85864c0000 close(3) = 0 mprotect(0x2b85864c0000, 4096, PROT_READ) = 0 munmap(0x2b85862a5000, 71746) = 0 open("/etc/passwd", O_RDONLY) = 3 fcntl(3, F_GETFD) = 0 fcntl(3, F_SETFD, FD_CLOEXEC) = 0 fstat(3, {st_mode=S_IFREG|0644, st_size=1823, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(3, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 1823 close(3) = 0 munmap(0x2b85862a5000, 4096) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/group", O_RDONLY) = 3 fcntl(3, F_GETFD) = 0 fcntl(3, F_SETFD, FD_CLOEXEC) = 0 fstat(3, {st_mode=S_IFREG|0644, st_size=743, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(3, "root:x:0:root\nbin:x:1:root,bin,d"..., 4096) = 743 close(3) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/tmp", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 3 fcntl(3, F_SETFD, FD_CLOEXEC) = 0 getdents(3, /* 8 entries */, 32768) = 264 lstat("/tmp/librsync-0.9.7.tar.gz", {st_mode=S_IFREG|0644, st_size=453802, ...}) = 0 getxattr("/tmp/librsync-0.9.7.tar.gz", "system.posix_acl_access", 0x0, 0) = -1 ENODATA (No data available) getxattr("/tmp/librsync-0.9.7.tar.gz", "system.posix_acl_default", 0x0, 0) = -1 ENODATA (No data available) lstat("/tmp/librsync-0.9.7", {st_mode=S_IFDIR|0777, st_size=4096, ...}) = 0 getxattr("/tmp/librsync-0.9.7", "system.posix_acl_access", 0x0, 0) = -1 ENODATA (No data available) getxattr("/tmp/librsync-0.9.7", "system.posix_acl_default", 0x0, 0) = -1 ENODATA (No data available) open("/etc/passwd", O_RDONLY) = 4 fcntl(4, F_GETFD) = 0 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=1823, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 1823 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/etc/ld.so.cache", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=71746, ...}) = 0 mmap(NULL, 71746, PROT_READ, MAP_PRIVATE, 4, 0) = 0x2b85862a5000 close(4) = 0 open("/lib64/libnss_ldap.so.2", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300r\4\0\0\0\0\0"..., 832) = 832 fstat(4, {st_mode=S_IFREG|0755, st_size=3169960, ...}) = 0 mmap(NULL, 5329912, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x2b85864c2000 mprotect(0x2b858679e000, 2093056, PROT_NONE) = 0 mmap(0x2b858699d000, 176128, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x2db000) = 0x2b858699d000 mmap(0x2b85869c8000, 62456, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x2b85869c8000 close(4) = 0 open("/lib64/libcom_err.so.2", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\n\300\2770\0\0\0"..., 832) = 832 fstat(4, 
{st_mode=S_IFREG|0755, st_size=10000, ...}) = 0 mmap(0x30bfc00000, 2103048, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x30bfc00000 mprotect(0x30bfc02000, 2093056, PROT_NONE) = 0 mmap(0x30bfe01000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x1000) = 0x30bfe01000 close(4) = 0 open("/lib64/libkeyutils.so.1", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\n@\2760\0\0\0"..., 832) = 832 fstat(4, {st_mode=S_IFREG|0755, st_size=9472, ...}) = 0 mmap(0x30be400000, 2102416, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x30be400000 mprotect(0x30be402000, 2093056, PROT_NONE) = 0 mmap(0x30be601000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x1000) = 0x30be601000 close(4) = 0 open("/lib64/libresolv.so.2", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\2402\0\2760\0\0\0"..., 832) = 832 fstat(4, {st_mode=S_IFREG|0755, st_size=92736, ...}) = 0 mmap(0x30be000000, 2181864, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x30be000000 mprotect(0x30be011000, 2097152, PROT_NONE) = 0 mmap(0x30be211000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x11000) = 0x30be211000 mmap(0x30be213000, 6888, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x30be213000 close(4) = 0 mprotect(0x30be211000, 4096, PROT_READ) = 0 munmap(0x2b85862a5000, 71746) = 0 rt_sigaction(SIGPIPE, {0x1, [], SA_RESTORER, 0x30ba8302d0}, {SIG_DFL, [], 0}, 8) = 0 geteuid() = 0 futex(0x2b85869c7708, FUTEX_WAKE_PRIVATE, 2147483647) = 0 open("/etc/ldap.conf", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=9119, ...}) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=9119, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "# @(#)$Id: ldap.conf,v 1.38 2006"..., 4096) = 4096 read(4, "Use the OpenLDAP password change"..., 4096) = 4096 read(4, " OpenLDAP 2.0 and earlier is \"no"..., 4096) = 927 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 uname({sys="Linux", node="massive.answeron.com", ...}) = 0 open("/etc/resolv.conf", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=107, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "; generated by /sbin/dhclient-sc"..., 4096) = 107 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 4 fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(4, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(4) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 4 fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(4, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(4) = 0 open("/etc/host.conf", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=17, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "order hosts,bind\n", 4096) = 17 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 futex(0x30bab54d44, FUTEX_WAKE_PRIVATE, 2147483647) = 0 open("/etc/hosts", O_RDONLY) = 4 fcntl(4, F_GETFD) = 0 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=187, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "# Do not remove the following li"..., 4096) = 187 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 
open("/etc/ld.so.cache", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=71746, ...}) = 0 mmap(NULL, 71746, PROT_READ, MAP_PRIVATE, 4, 0) = 0x2b85862a5000 close(4) = 0 open("/lib64/libnss_dns.so.2", O_RDONLY) = 4 read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\17\0\0\0\0\0\0"..., 832) = 832 fstat(4, {st_mode=S_IFREG|0755, st_size=23736, ...}) = 0 mmap(NULL, 2113792, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x2b85869d8000 mprotect(0x2b85869dc000, 2093056, PROT_NONE) = 0 mmap(0x2b8586bdb000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x3000) = 0x2b8586bdb000 close(4) = 0 mprotect(0x2b8586bdb000, 4096, PROT_READ) = 0 munmap(0x2b85862a5000, 71746) = 0 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4 connect(4, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, 28) = 0 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR) fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 gettimeofday({1276265920, 823870}, NULL) = 0 poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLOUT}]) sendto(4, "C\v\1\0\0\1\0\0\0\0\0\0\7massive\10answeron\3co"..., 38, MSG_NOSIGNAL, NULL, 0) = 38 poll([{fd=4, events=POLLIN}], 1, 5000) = 1 ([{fd=4, revents=POLLIN}]) ioctl(4, FIONREAD, [122]) = 0 recvfrom(4, "C\v\205\200\0\1\0\1\0\2\0\2\7massive\10answeron\3co"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, [16]) = 122 close(4) = 0 open("/etc/openldap/ldap.conf", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=335, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "#\n# LDAP Defaults\n#\n\n# See ldap."..., 4096) = 335 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 getuid() = 0 geteuid() = 0 getgid() = 0 getegid() = 0 open("/root/ldaprc", O_RDONLY) = -1 ENOENT (No such file or directory) open("/root/.ldaprc", O_RDONLY) = -1 ENOENT (No such file or directory) stat("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=9119, ...}) = 0 geteuid() = 0 brk(0x1c566000) = 0x1c566000 open("/etc/hosts", O_RDONLY) = 4 fcntl(4, F_GETFD) = 0 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=187, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "# Do not remove the following li"..., 4096) = 187 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 open("/etc/hosts", O_RDONLY) = 4 fcntl(4, F_GETFD) = 0 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=187, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b85862a5000 read(4, "# Do not remove the following li"..., 4096) = 187 read(4, "", 4096) = 0 close(4) = 0 munmap(0x2b85862a5000, 4096) = 0 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4 connect(4, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, 28) = 0 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR) fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 gettimeofday({1276265920, 855948}, NULL) = 0 poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLOUT}]) sendto(4, "\32 \1\0\0\1\0\0\0\0\0\0\4ldap\10answeron\3com\0\0"..., 35, MSG_NOSIGNAL, NULL, 0) = 35 poll([{fd=4, events=POLLIN}], 1, 5000) = 1 ([{fd=4, revents=POLLIN}]) ioctl(4, FIONREAD, [104]) = 0 recvfrom(4, "\32 \205\200\0\1\0\1\0\1\0\0\4ldap\10answeron\3com\0\0"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, [16]) = 104 close(4) = 0 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4 connect(4, 
{sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, 28) = 0 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR) fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 gettimeofday({1276265920, 858536}, NULL) = 0 poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLOUT}]) sendto(4, "I\375\1\0\0\1\0\0\0\0\0\0\4ldap\10answeron\3com\0\0"..., 35, MSG_NOSIGNAL, NULL, 0) = 35 poll([{fd=4, events=POLLIN}], 1, 5000) = 1 ([{fd=4, revents=POLLIN}]) ioctl(4, FIONREAD, [139]) = 0 recvfrom(4, "I\375\205\200\0\1\0\2\0\2\0\2\4ldap\10answeron\3com\0\0"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.10.20")}, [16]) = 139 close(4) = 0 socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 4 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 setsockopt(4, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR) fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.20.0.30")}, 16) = -1 EINPROGRESS (Operation now in progress) poll([{fd=4, events=POLLOUT|POLLERR|POLLHUP}], 1, 120000 And thats where it stops, right there after that last 120000.... Using strace, I can obviously CTRL+C to keep going. But like I said, normally the terminal completely freezes. Anyone have any clues?

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Sites (a.k.a. WAWS) as well as Windows Azure Cloud Services (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, so in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used to consume WAS when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include its files in our worker role project and mark them as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and the Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it is simple and easy to learn and deploy, and it uses a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, while "azure" does support using a storage connection string to create the storage client. Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
In the previous post we reviewed several network components including Open vSwitch, Network Namespaces, Linux Bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases we will use are: 1. Create network – what happens when we create a network, and how we can create multiple isolated networks. 2. Launch a VM – once we have networks we can launch VMs and connect them to networks. 3. DHCP request from a VM – OpenStack can automatically assign IP addresses to VMs. This is done through a local DHCP service controlled by OpenStack Neutron. We will see how this service runs and what a DHCP request and response look like. In this post we will show connectivity: we will see how packets get from point A to point B. We first focus on how a configured deployment looks, and only later discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. After the end game is clear and we know how the connectivity works, in a later post, we will take a step back and explain how Neutron configures the components to be able to provide such connectivity. We are going to get pretty technical shortly and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly and how to look at them will be very helpful when trying to debug a deployment in case something does not work. Use case #1: Create Network Creating a network is a simple operation; it can be performed from the GUI or the command line. When we create a network in OpenStack the network is only available to the tenant who created it, or it can be defined as “shared” and then used by all tenants. A network can have multiple subnets, but for demonstration purposes and for simplicity we will assume that each network has exactly one subnet. 
Creating a network from the command line will look like this: # neutron net-create net1 Created a new network: +---------------------------+--------------------------------------+ | Field                     | Value                                | +---------------------------+--------------------------------------+ | admin_state_up            | True                                 | | id                        | 5f833617-6179-4797-b7c0-7d420d84040c | | name                      | net1                                 | | provider:network_type     | vlan                                 | | provider:physical_network | default                              | | provider:segmentation_id  | 1000                                 | | shared                    | False                                | | status                    | ACTIVE                               | | subnets                   |                                      | | tenant_id                 | 9796e5145ee546508939cd49ad59d51f     | +---------------------------+--------------------------------------+ Creating a subnet for this network will look like this: # neutron subnet-create net1 10.10.10.0/24 Created a new subnet: +------------------+------------------------------------------------+ | Field            | Value                                          | +------------------+------------------------------------------------+ | allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} | | cidr             | 10.10.10.0/24                                  | | dns_nameservers  |                                                | | enable_dhcp      | True                                           | | gateway_ip       | 10.10.10.1                                     | | host_routes      |                                                | | id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           | | ip_version       | 4                                              | | name             |                                                | | network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           | | tenant_id        | 9796e5145ee546508939cd49ad59d51f               | +------------------+------------------------------------------------+ We now have a network and a subnet, on the network topology view this looks like this: Now let’s dive in and see what happened under the hood. Looking at the control node we will discover that a new namespace was created: # ip netns list qdhcp-5f833617-6179-4797-b7c0-7d420d84040c   The name of the namespace is qdhcp-<network id> (see above), let’s look into the namespace and see what’s in it: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00     inet 127.0.0.1/8 scope host lo     inet6 ::1/128 scope host        valid_lft forever preferred_lft forever 12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN     link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff     inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c     inet6 fe80::f816:3eff:fe1d:5c81/64 scope link        valid_lft forever preferred_lft forever   We see two interfaces in the namespace, one is the loopback and the other one is an interface called “tap26c9b807-7c”. This interface has the IP address of 10.10.10.3 and it will also serve dhcp requests in a way we will see later. 
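For repeated inspection it can be convenient to script these checks. The short Python sketch below is purely illustrative (the function names are arbitrary); it only wraps the same iproute2 commands shown above, "ip netns list" and "ip netns exec <namespace> ip addr", and assumes it is run as root on the control node:

import subprocess

def list_namespaces():
    """Return the namespace names reported by 'ip netns list'."""
    out = subprocess.check_output(["ip", "netns", "list"]).decode()
    # each non-empty line starts with the namespace name
    return [line.split()[0] for line in out.splitlines() if line.strip()]

def show_addresses(namespace):
    """Return the 'ip addr' output from inside the given namespace."""
    return subprocess.check_output(
        ["ip", "netns", "exec", namespace, "ip", "addr"]).decode()

if __name__ == "__main__":
    for ns in list_namespaces():
        print("=== {0} ===".format(ns))
        print(show_addresses(ns))
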
Let’s trace the connectivity of the “tap26c9b807-7c” interface from the namespace.  First stop is OVS, we see that the interface connects to bridge  “br-int” on OVS: # ovs-vsctl show 8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-ex         Port br-ex             Interface br-ex                 type: internal     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port "tap26c9b807-7c"             tag: 1             Interface "tap26c9b807-7c"                 type: internal         Port br-int             Interface br-int                 type: internal     ovs_version: "1.11.0"   In the picture above we have a veth pair which has two ends called “int-br-eth2” and "phy-br-eth2", this veth pair is used to connect two bridge in OVS "br-eth2" and "br-int". In the previous post we explained how to check the veth connectivity using the ethtool command. It shows that the two are indeed a pair: # ethtool -S int-br-eth2 NIC statistics:      peer_ifindex: 10 . .   #ip link . . 10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 . . Note that “phy-br-eth2” is connected to a bridge called "br-eth2" and one of this bridge's interfaces is the physical link eth2. This means that the network which we have just created has created a namespace which is connected to the physical interface eth2. eth2 is the “VM network” the physical interface where all the virtual machines connect to where all the VMs are connected. About network isolation: OpenStack supports creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels, this is configured as part of the initial setup in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism a VLAN tag is allocated by Neutron from a pre-defined VLAN tags pool and assigned to the newly created network. By provisioning VLAN tags to the networks Neutron allows creation of multiple isolated networks on the same physical link.  The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs to networks. The VLAN allocation and provisioning is handled by Neutron which keeps track of the VLAN tags, and responsible for allocating and reclaiming VLAN tags. In the example above net1 has the VLAN tag 1000, this means that whenever a VM is created and connected to this network the packets from that VM will have to be tagged with VLAN tag 1000 to go on this particular network. This is true for namespace as well, if we would like to connect a namespace to a particular network we have to make sure that the packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface “tap26c9b807-7c” has vlan tag 1 assigned to it, if we examine OVS we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2 and vice versa. 
We can see this using the dump-flows command on OVS for packets going to the VM network we see the modification done on br-eth2: #  ovs-ofctl dump-flows br-eth2 NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL  cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop  cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL   For packets coming from the interface to the namespace we see the following modification: #  ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL  cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop  cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL   To summarize we can see that when a user creates a network Neutron creates a namespace and this namespace is connected through OVS to the “VM network”. OVS also takes care of tagging the packets from the namespace to the VM network with the correct VLAN tag and knows to modify the VLAN for packets coming from VM network to the namespace. Now let’s see what happens when a VM is launched and how it is connected to the “VM network”. Use case #2: Launch a VM Launching a VM can be done from Horizon or from the command line this is how we do it from Horizon: Attach the network: And Launch Once the virtual machine is up and running we can see the associated IP using the nova list command : # nova list +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | ID                                   | Name         | Status | Task State | Power State | Networks        | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ The nova list command shows us that the VM is running and that the IP 10.10.10.2 is assigned to this VM. Let’s trace the connectivity from the VM to VM network on eth2 starting with the VM definition file. The configuration files of the VM including the virtual disk(s), in case of ephemeral storage, are stored on the compute node at/var/lib/nova/instances/<instance-id>/. 
Looking into the VM definition file, libvirt.xml, we see that the VM is connected to an interface called “tap53903a95-82” which is connected to a Linux bridge called “qbr53903a95-82”: <interface type="bridge">       <mac address="fa:16:3e:fe:c7:87"/>       <source bridge="qbr53903a95-82"/>       <target dev="tap53903a95-82"/>     </interface>   Looking at the bridge using the brctl show command we see this: # brctl show bridge name     bridge id               STP enabled     interfaces qbr53903a95-82          8000.7e7f3282b836       no              qvb53903a95-82                                                         tap53903a95-82    The bridge has two interfaces, one connected to the VM (“tap53903a95-82”) and another one (“qvb53903a95-82”) connected to the “br-int” bridge on OVS: # ovs-vsctl show 83c42f80-77e9-46c8-8560-7697d76de51c     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-int         Port br-int             Interface br-int                 type: internal         Port "int-br-eth2"             Interface "int-br-eth2"         Port "qvo53903a95-82"             tag: 3             Interface "qvo53903a95-82"     ovs_version: "1.11.0"   As we showed earlier, “br-int” is connected to “br-eth2” on OVS using the veth pair int-br-eth2/phy-br-eth2, and br-eth2 is connected to the physical interface eth2. The whole flow end to end looks like this: VM → tap53903a95-82 (virtual interface) → qbr53903a95-82 (Linux bridge) → qvb53903a95-82 (interface connecting the Linux bridge to the OVS bridge br-int) → int-br-eth2 (one end of the veth pair) → phy-br-eth2 (the other end of the veth pair) → eth2 physical interface. The purpose of the Linux bridge connecting to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point, which is the interface of the VM; since iptables cannot be applied to OVS bridges we use a Linux bridge to apply them. In the future we hope to see this Linux bridge going away. VLAN tags: As we discussed in the first use case, net1 is using VLAN tag 1000; looking at OVS above we see that qvo53903a95-82 is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 as we go to the physical network is done by OVS as part of the packet flow of br-eth2, in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through the chain of elements described here, and as packets travel from the VM to the network and back the VLAN tag is modified. Use case #3: Serving a DHCP request coming from the virtual machine In the previous use cases we have shown that both the namespace called qdhcp-<some id> and the VM end up connecting to the physical interface eth2 on their respective nodes, and both will tag their packets with VLAN tag 1000. We saw that the namespace has an interface with IP 10.10.10.3. Since the VM and the namespace are connected to each other and have interfaces on the same subnet they can ping each other; in this picture we see a ping from the VM, which was assigned 10.10.10.2, to the namespace: The fact that they are connected and can ping each other can become very handy when something doesn’t work right and we need to isolate the problem. In such a case, knowing that we should be able to ping from the VM to the namespace and back can be used to trace the disconnect using tcpdump or other monitoring tools. 
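Since that namespace-to-VM ping is such a useful first test, it can also be scripted. The snippet below is illustrative only: the namespace name and VM address are the ones from this walkthrough and would differ in another deployment, and it assumes root on the control node.

import subprocess

# values from this walkthrough; replace with your own network id and VM IP
NAMESPACE = "qdhcp-5f833617-6179-4797-b7c0-7d420d84040c"
VM_IP = "10.10.10.2"

def ping_from_namespace(namespace, address, count=3):
    """Ping an address from inside a network namespace; True if it answers."""
    cmd = ["ip", "netns", "exec", namespace, "ping", "-c", str(count), address]
    return subprocess.call(cmd) == 0

if __name__ == "__main__":
    if ping_from_namespace(NAMESPACE, VM_IP):
        print("VM %s is reachable from %s" % (VM_IP, NAMESPACE))
    else:
        print("No reply - check the VLAN tags, the OVS flows and the Linux bridge")
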
To serve DHCP requests coming from VMs on the network Neutron uses a Linux tool called “dnsmasq”,this is a lightweight DNS and DHCP service you can read more about it here. If we look at the dnsmasq on the control node with the ps command we see this: dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal The service connects to the tap interface in the namespace (“--interface=tap26c9b807-7c”), If we look at the hosts file we see this: # cat  /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2   If you look at the console output above you can see the MAC address fa:16:3e:fe:c7:87 which is the VM MAC. This MAC address is mapped to IP 10.10.10.2 and so when a DHCP request comes with this MAC dnsmasq will return the 10.10.10.2.If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM) we see the following: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n 19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310 19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325   To summarize, the DHCP service is handled by dnsmasq which is configured by Neutron to listen to the interface in the DHCP namespace. Neutron also configures dnsmasq with the combination of MAC and IP so when a DHCP request comes along it will receive the assigned IP. Summary In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped understand how an end to end connection is being made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw from what we saw here is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP then there is reason to believe that the network is working as expected. We saw that a packet has to travel through a long list of components before reaching its destination and if it has done so successfully this means that many components are functioning properly. In the next post we will look at some more sophisticated services Neutron supports and see how they work. We will see that while there are some more components involved for the most part the concepts are the same. @RonenKofman

    Read the article

  • Deploying Django on EC2 using Bitnami Djangostack: WSGI script cannot be loadded

    - by Arman
    I've been struggling to deploy Django application on Amazon EC2 using Bitnami Djangostack for the last couple of days. When I go to http://dewey.io I see the default bitnami page (/opt/bitnami/apache2/htdocs/index.html), however, when I open http://dewey.io/portnoy, I get 'Internal Server Error'. But it's known that if mod_wsgi is setup correctly, the DocumentRoot value from httpd.conf is ignored, thus, I should see my Django application when accessing http://dewey.io. Essentially, the main error is this - 'Target WSGI script cannot be loaded as Python module'. Two questions: 1) any ideas how to fix these mod_wsgi errors (the Apache logs are below)? 2) how to disable the default /opt/bitnami/apache2/htdocs/index.html page and show my homepage from django application when accessing http://dewey.io? Thank you in advance! The details On my EC2 instance I"m running 64-bit Ubuntu 12.04 with DjangoStack 1.4-1. My Django project is located here - /opt/bitnami/apps/django/django_projects/portnoy. root@dewey:/opt/bitnami/apps/django/django_projects/portnoy# ls manage.py README.md settings.py site_media users Procfile sandbox static test.py topics urls.py views.py __init__.pyc templates testviews.py Apache error logs (/opt/bitnami/apache2/logs/error_log): [Wed Jul 04 02:29:00 2012] [error] [client 140.180.6.212] File does not exist: /opt/bitnami/apache2/htdocs/favicon.ico [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] mod_wsgi (pid=3990): Target WSGI script '/opt/bitnami/apps/django/scripts/django.wsgi' cannot be loaded as Python module. [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] mod_wsgi (pid=3990): Exception occurred processing WSGI script '/opt/bitnami/apps/django/scripts/django.wsgi'. [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] Traceback (most recent call last): [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File "/opt/bitnami/apps/django/scripts/django.wsgi", line 8, in <module> [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] import django.core.handlers.wsgi [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 8, in <module> [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] from django import http [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/http/__init__.py", line 119, in <module> [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] from django.http.multipartparser import MultiPartParser [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/http/multipartparser.py", line 13, in <module> [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] from django.utils.text import unescape_entities [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/utils/text.py", line 4, in <module> [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] from gzip import GzipFile [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File "/opt/bitnami/python/lib/python2.7/gzip.py", line 10, in <module> [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] import io [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File "/opt/bitnami/python/lib/python2.7/io.py", line 60, in <module> [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] import _io [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] 
ImportError: /opt/bitnami/python/lib/python2.7/lib-dynload/_io.so: undefined symbol: PyUnicodeUCS2_AsEncodedString [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File does not exist: /opt/bitnami/apache2/htdocs/favicon.ico [Wed Jul 04 02:44:00 2012] [error] [client 140.180.6.212] File does not exist: /opt/bitnami/apache2/htdocs/favicon.ico Let me quickly introduce the contents of the files to make the case more concrete. This is my /etc/apache2/sites-available/default file <VirtualHost *:80> ServerAdmin [email protected] ServerName dewey.io Alias /site_media/ /opt/bitnami/apps/django/django_projects/portnoy/site_media/ Alias /static/ /opt/bitnami/apps/django/lib/python2.7/site-packages/django/contrib/admin/static/ Alias /robots.txt /opt/bitnami/apps/django/django_projects/portnoy/site_media/robots.txt Alias /favicon.ico /opt/bitnami/apps/django/django_projects/portnoy/site_media/favicon.ico CustomLog "|/usr/sbin/rotatelogs /opt/bitnami/apps/django/django_projects/logs/access.log.%Y%m%d-%H%M%S 5M" combined ErrorLog "|/usr/sbin/rotatelogs /opt/bitnami/apps/django/django_projects/logs/error.log.%Y%m%d-%H%M%S 5M" LogLevel warn WSGIProcessGroup dewey.io WSGIScriptAlias / /opt/bitnami/apps/django/scripts/django.wsgi <Directory /opt/bitnami/apps/django/django_projects/portnoy/site_media> Order deny,allow Allow from all Options -Indexes FollowSymLinks </Directory> <Directory /opt/bitnami/apps/django/django_projects/portnoy/conf/apache> Order deny,allow Allow from all </Directory> </VirtualHost> This is my /opt/bitnami/apps/django/scripts/django.wsgi file import os, sys sys.path.append('/opt/bitnami/apps/django/lib/python2.7/site-packages/') sys.path.append('/opt/bitnami/apps/django/django_projects') sys.path.append('/opt/bitnami/apps/django/django_projects/portnoy') os.environ['DJANGO_SETTINGS_MODULE'] = 'portnoy.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() Here is the relevant portion of /opt/bitnami/apache2/conf/httpd.conf file: ServerRoot "/opt/bitnami/apache2" Listen 80 ServerName dewey.io DocumentRoot "/opt/bitnami/apache2/htdocs" LoadModule wsgi_module modules/mod_wsgi.so WSGIPythonHome /opt/bitnami/python Include "/opt/bitnami/apache2/conf/ssi.conf" Include "/opt/bitnami/apps/django/conf/django.conf" Include "/opt/bitnami/apache2/conf/bitnami/httpd.conf"
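One common cause of an "undefined symbol: PyUnicodeUCS2_AsEncodedString" ImportError is a unicode build mismatch: a C extension built against a narrow-unicode (UCS2) Python being loaded by a wide-unicode (UCS4) interpreter, which can happen when mod_wsgi is compiled against the system Python instead of the Bitnami one. A quick, hedged way to compare the two builds is to run a snippet like the following with /opt/bitnami/python/bin/python and with the system python, and see whether the reported values differ:

# A narrow (UCS2) build reports sys.maxunicode == 65535,
# a wide (UCS4) build reports sys.maxunicode == 1114111.
import sys

print(sys.version)
print("maxunicode: %d" % sys.maxunicode)
if sys.maxunicode == 0xFFFF:
    print("narrow (UCS2) build")
else:
    print("wide (UCS4) build")

If the two interpreters disagree, rebuilding mod_wsgi against the same Python that WSGIPythonHome points to is the usual direction to investigate.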

    Read the article

  • Need some help on how to replay the last game of a java maze game

    - by Marty
    Hello, I am working on creating a Java maze game for a project. The maze is displayed on the console as standard output not in an applet. I have created most of hte code I need, however I am stuck at one problem and that is I need a user to be able to replay the last game i.e redraw the maze with the users moves but without any input from the user. I am not sure on what course of action to take, i was thinking about copying each users move or the position of each move into another array, as you can see i have 2 variables which hold the position of the player, plyrX and plyrY do you think copying these values into a new array after each move would solve my problem and how would i go about this? I have updated my code, apologies about the textIO.java class not being present, not sure how to resolve that exept post a link to TextIO.java [TextIO.java][1] My code below is updated with a new array of type char to hold values from the original maze (read in from text file and displayed using unicode characters) and also to new variables c_plyrX and c_plyrY which I am thinking should hold the values of plyrX and plyrY and copy them into the new array. When I try to call the replayGame(); method from the menu the maze loads for a second then the console exits so im not sure what I am doing wrong Thanks public class MazeGame { //unicode characters that will define the maze walls, //pathways, and in game characters. final static char WALL = '\u2588'; //wall final static char PATH = '\u2591'; //pathway final static char PLAYER = '\u25EF'; //player final static char ENTRANCE = 'E'; //entrance final static char EXIT = '\u2716'; //exit //declaring member variables which will hold the maze co-ordinates //X = rows, Y = columns static int entX = 0; //entrance X co-ordinate static int entY = 1; //entrance y co-ordinate static int plyrX = 0; static int plyrY = 1; static int exitX = 24; //exit X co-ordinate static int exitY = 37; //exit Y co-ordinate //static member variables which hold maze values //used so values can be accessed from different methods static int rows; //rows variable static int cols; //columns variable static char[][] maze; //defines 2 dimensional array to hold the maze //variables that hold player movement values static char dir; //direction static int spaces; //amount of spaces user can travel //variable to hold amount of moves the user has taken; static int movesTaken = 0; //new array to hold player moves for replaying game static char[][] mazeCopy; static int c_plyrX; static int c_plyrY; /** userMenu method for displaying the user menu which will provide various options for * the user to choose such as play a maze game, get instructions, etc. */ public static void userMenu(){ TextIO.putln("Maze Game"); TextIO.putln("*********"); TextIO.putln("Choose an option."); TextIO.putln(""); TextIO.putln("1. Play the Maze Game."); TextIO.putln("2. View Instructions."); TextIO.putln("3. Replay the last game."); TextIO.putln("4. 
Exit the Maze Game."); TextIO.putln(""); int option; //variable for holding users option TextIO.put("Type your choice: "); option = TextIO.getlnInt(); //gets users option //switch statement for processing menu options switch(option){ case 1: playMazeGame(); case 2: instructions(); case 3: if (c_plyrX == plyrX && c_plyrY == plyrY)replayGame(); else { TextIO.putln("Option not available yet, you need to play a game first."); TextIO.putln(); userMenu(); } case 4: System.exit(0); //exits the user out of the console default: TextIO.put("Option must be 1, 2, 3 or 4"); } } //end of userMenu /**main method, will call the userMenu and get the users choice and call * the relevant method to execute the users choice. */ public static void main(String[]args){ userMenu(); //calls the userMenu method } //end of main method /**instructions method, displays instructions on how to play * the game to the user/ */ public static void instructions(){ TextIO.putln("To beat the Maze Game you have to move your character"); TextIO.putln("through the maze and reach the exit in as few moves as possible."); TextIO.putln(""); TextIO.putln("Your characer is displayed as a " + PLAYER); TextIO.putln("The maze exit is displayed as a " + EXIT); TextIO.putln("Reach the exit and you have won escaped the maze."); TextIO.putln("To control your character type the direction you want to go"); TextIO.putln("and how many spaces you want to move"); TextIO.putln("for example 'D3' will move your character"); TextIO.putln("down 3 spaces."); TextIO.putln("Remember you can't walk through walls!"); boolean insOption; //boolean variable TextIO.putln(""); TextIO.put("Do you want to play the Maze Game now? (Y or N) "); insOption = TextIO.getlnBoolean(); if (insOption == true)playMazeGame(); else userMenu(); } //end of instructions method /**playMazeGame method, calls the loadMaze method and the charMove method * to start playing the Maze Game. */ public static void playMazeGame(){ loadMaze(); plyrMoves(); } //end of playMazeGame method /**loadMaze method, loads the 39x25 maze from the MazeGame.txt text file * and inserts values from the text file into the maze array and * displays the maze on screen using the unicode block characters. * plyrX and plyrY variables are set at their staring co ordinates so that when * a game is completed and the user selects to play a new game * the player character will always be at position 01. */ public static void loadMaze(){ plyrX = 0; plyrY = 1; TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions maze = new char[rows][cols]; //creates maze array of base type char with specified dimnensions //loop to process the array and read in values from the text file. for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ maze[i][j] = TextIO.getChar(); } TextIO.getln(); } //end for loop TextIO.readStandardInput(); //closes MazeGame.txt file and reads from //standard input. 
//loop to process the array values and display as unicode characters for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ if (i == plyrX && j == plyrY){ plyrX = i; plyrY = j; TextIO.put(PLAYER); //puts the player character at player co-ords } else{ if (maze[i][j] == '0') TextIO.putf("%c",WALL); //puts wall block if (maze[i][j] == '1') TextIO.putf("%c",PATH); //puts path block if (maze[i][j] == '2') { entX = i; entY = j; TextIO.putf("%c",ENTRANCE); //puts entrance character } if (maze[i][j] == '3') { exitX = i; //holds value of exit exitY = j; //co-ordinates TextIO.putf("%c",EXIT); //puts exit character } } } TextIO.putln(); } //end for loop } //end of loadMaze method /**redrawMaze method, method for redrawing the maze after each move. * */ public static void redrawMaze(){ TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions maze = new char[rows][cols]; //creates maze array of base type char with specified dimnensions //loop to process the array and read in values from the text file. for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ maze[i][j] = TextIO.getChar(); } TextIO.getln(); } //end for loop TextIO.readStandardInput(); //closes MazeGame.txt file and reads from //standard input. //loop to process the array values and display as unicode characters for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ if (i == plyrX && j == plyrY){ plyrX = i; plyrY = j; TextIO.put(PLAYER); //puts the player character at player co-ords } else{ if (maze[i][j] == '0') TextIO.putf("%c",WALL); //puts wall block if (maze[i][j] == '1') TextIO.putf("%c",PATH); //puts path block if (maze[i][j] == '2') { entX = i; entY = j; TextIO.putf("%c",ENTRANCE); //puts entrance character } if (maze[i][j] == '3') { exitX = i; //holds value of exit exitY = j; //co-ordinates TextIO.putf("%c",EXIT); //puts exit character } } } TextIO.putln(); } //end for loop } //end redrawMaze method /**replay game method * */ public static void replayGame(){ c_plyrX = plyrX; c_plyrY = plyrY; TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions mazeCopy = new char[rows][cols]; //creates maze array of base type char with specified dimnensions //loop to process the array and read in values from the text file. for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ mazeCopy[i][j] = TextIO.getChar(); } TextIO.getln(); } //end for loop TextIO.readStandardInput(); //closes MazeGame.txt file and reads from //standard input. 
//loop to process the array values and display as unicode characters for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ if (i == c_plyrX && j == c_plyrY){ c_plyrX = i; c_plyrY = j; TextIO.put(PLAYER); //puts the player character at player co-ords } else{ if (mazeCopy[i][j] == '0') TextIO.putf("%c",WALL); //puts wall block if (mazeCopy[i][j] == '1') TextIO.putf("%c",PATH); //puts path block if (mazeCopy[i][j] == '2') { entX = i; entY = j; TextIO.putf("%c",ENTRANCE); //puts entrance character } if (mazeCopy[i][j] == '3') { exitX = i; //holds value of exit exitY = j; //co-ordinates TextIO.putf("%c",EXIT); //puts exit character } } } TextIO.putln(); } //end for loop } //end replayGame method /**plyrMoves method, method for moving the players character * around the maze. */ public static void plyrMoves(){ int nplyrX = plyrX; int nplyrY = plyrY; int pMoves; direction(); //UP if (dir == 'U' || dir == 'u'){ nplyrX = plyrX; nplyrY = plyrY; for(pMoves = 0; pMoves <= spaces; pMoves++){ if (maze[nplyrX][nplyrY] == '0'){ TextIO.putln("Invalid move, try again."); } else if (pMoves != spaces){ nplyrX =plyrX + 1; } else { plyrX = plyrX-spaces; c_plyrX = plyrX; movesTaken++; } } }//end UP if //DOWN if (dir == 'D' || dir == 'd'){ nplyrX = plyrX; nplyrY = plyrY; for (pMoves = 0; pMoves <= spaces; pMoves ++){ if (maze[nplyrX][nplyrY] == '0'){ TextIO.putln("Invalid move, try again"); } else if (pMoves != spaces){ nplyrX = plyrX+1; } else{ plyrX = plyrX+spaces; c_plyrX = plyrX; movesTaken++; } } } //end DOWN if //LEFT if (dir == 'L' || dir =='l'){ nplyrX = plyrX; nplyrY = plyrY; for (pMoves = 0; pMoves <= spaces; pMoves++){ if (maze[nplyrX][nplyrY] == '0'){ TextIO.putln("Invalid move, try again"); } else if (pMoves != spaces){ nplyrY = plyrY + 1; } else{ plyrY = plyrY-spaces; c_plyrY = plyrY; movesTaken++; } } } //end LEFT if //RIGHT if (dir == 'R' || dir == 'r'){ nplyrX = plyrX; nplyrY = plyrY; for (pMoves = 0; pMoves <= spaces; pMoves++){ if (maze[nplyrX][nplyrY] == '0'){ TextIO.putln("Invalid move, try again."); } else if (pMoves != spaces){ nplyrY += 1; } else{ plyrY = plyrY+spaces; c_plyrY = plyrY; movesTaken++; } } } //end RIGHT if //prints message if player escapes from the maze. if (maze[plyrX][plyrY] == '3'){ TextIO.putln("****Congratulations****"); TextIO.putln(); TextIO.putln("You have escaped from the maze."); TextIO.putln(); userMenu(); } else{ movesTaken++; redrawMaze(); plyrMoves(); } } //end of plyrMoves method /**direction, method * */ public static char direction(){ TextIO.putln("Enter the direction you wish to move in and the distance"); TextIO.putln("i.e D3 = move down 3 spaces"); TextIO.putln("U - Up, D - Down, L - Left, R - Right: "); dir = TextIO.getChar(); if (dir =='U' || dir == 'D' || dir == 'L' || dir == 'R' || dir == 'u' || dir == 'd' || dir == 'l' || dir == 'r'){ spacesMoved(); } else{ loadMaze(); TextIO.putln("Invalid direction!"); TextIO.put("Direction must be one of U, D, L or R"); direction(); } return dir; //returns the value of dir (direction) } //end direction method /**spaces method, gets the amount of spaces the user wants to move * */ public static int spacesMoved(){ TextIO.putln(" "); spaces = TextIO.getlnInt(); if (spaces <= 0){ loadMaze(); TextIO.put("Invalid amount of spaces, try again"); spacesMoved(); } return spaces; } //end spacesMoved method } //end of MazeGame class
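The underlying idea, recording each accepted move and then replaying the recording without prompting the player, can be sketched independently of the Java specifics. The snippet below is only an illustration (written in Python for brevity; the class and method names are arbitrary and wall checks are omitted), but the same structure maps onto a Java list of moves consumed by the existing plyrMoves logic:

# Minimal sketch of "record then replay": every accepted move is appended to
# a history list, and replaying feeds that history back through the same move
# routine instead of asking the player for input.
class MazeGame(object):
    def __init__(self, start=(0, 1)):
        self.position = start
        self.history = []          # list of (direction, spaces) tuples

    def move(self, direction, spaces, record=True):
        x, y = self.position
        step = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}[direction]
        self.position = (x + step[0] * spaces, y + step[1] * spaces)
        if record:                 # only live games extend the history
            self.history.append((direction, spaces))
        print("moved %s%d -> %s" % (direction, spaces, self.position))

    def replay(self):
        self.position = (0, 1)     # reset to the entrance
        for direction, spaces in list(self.history):
            self.move(direction, spaces, record=False)

game = MazeGame()
game.move("D", 3)
game.move("R", 2)
game.replay()                      # re-runs the same two moves, no input needed
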

    Read the article

  • Certificate Trusts Lists in IIS7

    - by BrettRobi
    I am trying to enable mutual authentication for my WebService hosted in IIS7. I have the server side cert setup and working but cannot figure out how to get a Certificate Trust List created and setup in IIS7 so that I can require and validate client side certificates. All of my client side certs are signed by my own root cert so I need to create a CTL that contains just my root cert and then have IIS validate client provided certs against the CTL. Can anyone shed some light on how to do this? IIS6 had a UI for assigning a CTL, but I can find nothing similar in IIS7. Update: I have now successfully used MakeCTL in wizard mode to create a CTL with a Friendly Name. However I don't have adsutil support on my IIS7 box so via other posts elsewhere I am trying to use the 'netsh http add sslcert' command to assign the CTL to my site. Before I could use this command I had to remove the existing SSL cert that was assigned to my site for server authentication. Then in my netsh command I specify the thumbprint of that very same SSL cert I removed, plus a made up appid, plus 'sslctlidentifier=MyCTL sslctlstorename=CA'. The resulting command is: netsh http add sslcert ipport=10.10.10.10:443 certhash=adfdffa988bb50736b8e58a54c1eac26ed005050 appid={ffc3e181-e14b-4a21-b022-59fc669b09ff} sslctlidentifier=MyCTL sslctlstorename=CA (the IP addr is munged), but I am getting this error: SSL Certificate add failed, Error: 1312 A specified logon session does not exist. It may already have been terminated. I am sure the error is related to the CTL options because if I remove them it works (though no CTL is assigned of course). Can anyone help me take this last step and make this work? UPDATE 01-07-2010: I never resolved this with IIS 7.0 and have since migrated our app to IIS 7.5 and am giving this another try. Per the response from Taras Chuhay I installed IIS6 Compatibility on my test server and tried the steps he documented using adsutil.vbs (which can also be found here). I immediately ran into this error: ErrNumber: -2147023584 Error trying to SET the Property: SslCtlIdentifier when running this command: adsutil.vbs set w3svc/1/SslCtlIdentifier MyFriendlyName I then went on to try the next adsutil.vbs command documented and it failed with the same error. I have verified that the CTL I created has a Friendly Name of MyFriendlyName and that it exists in the 'Intermediate Certification Authorities\Certificate Trust List' store of LocalComputer. So once again I am at a dead standstill. I don't know what else to try. Has anyone ever gotten CTL's to work with IIS7 or 7.5? Ever? Am I beating a DEAD horse. Google turns up nothing but my own posts and other similar stories. Update 2/23/10 - I've confirmed with Microsoft that this is a bug with IIS 7.5, but it does work with IIS 7. Check out this link for details: http://viisual.net/configuration/IIS7-CTLs.htm Update 6/08/10 - I can now confirm that KB981506 resolves this issue. There is a patch associated with this KB that must be applied to Server 2008 R2 machines to enable this functionality. Once that is installed all works flawlessly for me.

    Read the article

  • Running two wsgi applications on the same server gdal org exception with apache2/modwsgi

    - by monkut
    I'm trying to run two wsgi applications, one django and the other tilestache using the same server. The tilestache server accesses the db via django to query the db. In the process of serving tiles it performs a transform on the incoming bbox, and in this process hit's the following error. The transform works without error for the specific bbox polygon when run manually from the python shell: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 325, in __call__ mimetype, content = requestHandler(self.config, environ['PATH_INFO'], environ['QUERY_STRING']) File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 231, in requestHandler mimetype, content = getTile(layer, coord, extension) File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 84, in getTile tile = layer.render(coord, format) File "/usr/lib/python2.7/dist-packages/TileStache/Core.py", line 295, in render tile = provider.renderArea(width, height, srs, xmin, ymin, xmax, ymax, coord.zoom) File "/var/www/tileserver/providers.py", line 59, in renderArea bbox.transform(METERS_SRID) File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/geos/geometry.py", line 520, in transform g = gdal.OGRGeometry(self.wkb, srid) File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 131, in __init__ self.__class__ = GEO_CLASSES[self.geom_type.num] File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 245, in geom_type return OGRGeomType(capi.get_geom_type(self.ptr)) File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geomtype.py", line 43, in __init__ raise OGRException('Invalid OGR Integer Type: %d' % type_input) OGRException: Invalid OGR Integer Type: 1987180391 I think I've hit the non thread safe issue with GDAL, metioned on the django site. Is there a way I could configure this so that it would work? Apache Version: Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured Apache apache2/sites-available/default: <VirtualHost *:80> ServerAdmin ironman@localhost DocumentRoot /var/www/bin LogLevel warn WSGIDaemonProcess lbs processes=2 maximum-requests=500 threads=1 WSGIProcessGroup lbs WSGIScriptAlias / /var/www/bin/apache/django.wsgi Alias /static /var/www/lbs/static/ </VirtualHost> <VirtualHost *:8080> ServerAdmin ironman@localhost DocumentRoot /var/www/bin LogLevel warn WSGIDaemonProcess tilestache processes=1 maximum-requests=500 threads=1 WSGIProcessGroup tilestache WSGIScriptAlias / /var/www/bin/tileserver/tilestache.wsgi </VirtualHost> Django Version: 1.4 httpd.conf: Listen 8080 NameVirtualHost *:8080 UPDATE I've added the a test.wsgi script to determine if the GLOBAL interpreter setting is correct, as mentioned by graham and described here: http://code.google.com/p/modwsgi/wiki/CheckingYourInstallation#Sub_Interpreter_Being_Used It seems to show the expected result: [Tue Aug 14 10:32:01 2012] [notice] Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured -- resuming normal operations [Tue Aug 14 10:32:01 2012] [info] mod_wsgi (pid=29891): Attach interpreter ''. I've worked around the issue for now by changing the srs used in the db so that the transform is unnecessary in tilestache app. I don't understand why the transform() method, when called in the django app works, but then in the tilestache app fails. 
tilestache.wsgi:
#!/usr/bin/python
import os
import time
import sys
import TileStache

current_dir = os.path.abspath(os.path.dirname(__file__))
project_dir = os.path.realpath(os.path.join(current_dir, "..", ".."))
sys.path.append(project_dir)
sys.path.append(current_dir)
os.environ['DJANGO_SETTINGS_MODULE'] = 'bin.settings'
sys.stdout = sys.stderr
# wait for the apache django lbs server to start up,
# --> in order to retrieve the tilestache cfg
time.sleep(2)
tilestache_config_url = "http://127.0.0.1/tilestache/config/"
application = TileStache.WSGITileServer(tilestache_config_url)
UPDATE 2 So it turned out I did need to use a projection other than the google (900913) one in the db, so my previous workaround failed. While I'd like to fix this issue, I decided to work around it this time by making a django view that performs the transform needed. So now tilestache requests the data through the django app and not internally.
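If the working hypothesis about GDAL/OGR not being safe across threads or sub-interpreters is right, one mitigation to experiment with is serializing every geometry transform behind a single process-wide lock inside the TileStache provider. This is only a sketch (the helper name is arbitrary), and it assumes the provider builds the bbox as a GEOS polygon much as the traceback suggests:

import threading
from django.contrib.gis.geos import Polygon

# One lock per process so only one thread runs a GDAL/OGR transform at a time.
_gdal_lock = threading.Lock()

def transform_bbox(xmin, ymin, xmax, ymax, source_srid, target_srid):
    """Build the bbox polygon and transform it while holding the lock."""
    bbox = Polygon.from_bbox((xmin, ymin, xmax, ymax))
    bbox.srid = source_srid
    with _gdal_lock:
        bbox.transform(target_srid)   # the call that raised OGRException above
    return bbox

Whether this actually avoids the OGRException depends on the real root cause; the workaround described in UPDATE 2, routing the request through the Django app so the transform happens there, sidesteps the problem entirely.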

    Read the article

  • shell script over SSH ends unexpectedly after running 'ant build'

    - by YShin
    I wrote a shell script that runs on a remote host to build source code with the 'ant build' command and then distribute the built binary to other servers. However, right after the Ant build finishes successfully (I can see the command-line output saying the build was successful), the SSH session ends and none of the commands after 'ant build' get executed. I'm confused about what might cause this behavior. I suspected that 'ant build' takes too long and SSH somehow quits after such a long command, but that doesn't seem right: if I replace 'ant build' with 'sleep 60', the later commands run as intended. I'm new to shell programming, so I might have made a silly assumption. Can someone provide a pointer to a possible cause of this problem?

    My shell script:

        #!/bin/bash
        # Inject some variables
        ssh -T $SSH_USER@$SSH_URL "setenv REMOTE_BASE_DIR $REMOTE_BASE_DIR; setenv CASSANDRA_SRC_TAR_FILE $CASSANDRA_SRC_TAR_FILE; setenv CASSANDRA_SRC_DIR_NAME $CASSANDRA_SRC_DIR_NAME; setenv CLUSTER_SIZE $CLUSTER_SIZE; setenv REMOTE_REDEPLOY_SCRIPT $REMOTE_REDEPLOY_SCRIPT; /bin/bash" << 'EOF'
        export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
        cd $REMOTE_BASE_DIR/$CASSANDRA_SRC_DIR_NAME
        echo "## Building Cassandra source"
        ant clean build
        # Anything after this doesn't run
        echo "## Ant Build is over. Invoking redeploy script on remote nodes"
        # Invoke redeploy script for each node
        for (( i=0; i < CLUSTER_SIZE; i++))
        do
            echo "## Invoking redeploy script on node-$i"
        done
        EOF

    Command-line output:

        ## Building Cassandra source
        Buildfile: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build.xml

        clean:
           [delete] Deleting directory /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/test
           [delete] Deleting directory /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes
           [delete] Deleting directory /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/gen-java
           [delete] Deleting directory /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/resources/org/apache/cassandra/config

        init:
            [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/main
            [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/thrift
            [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/test/lib
            [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/test/classes
            [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/gen-java

        maven-ant-tasks-localrepo:

        maven-ant-tasks-download:

        maven-ant-tasks-init:

        maven-declare-dependencies:

        maven-ant-tasks-retrieve-build:

        init-dependencies:
             [echo] Loading dependency paths from file: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/build-dependencies.xml

        check-gen-cli-grammar:

        gen-cli-grammar:
             [echo] Building Grammar /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/java/org/apache/cassandra/cli/Cli.g ....

        check-gen-cql2-grammar:

        gen-cql2-grammar:
             [echo] Building Grammar /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/java/org/apache/cassandra/cql/Cql.g ...

        check-gen-cql3-grammar:

        gen-cql3-grammar:
             [echo] Building Grammar /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/java/org/apache/cassandra/cql3/Cql.g ...

        build-project:
             [echo] apache-cassandra: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build.xml
            [javac] Compiling 43 source files to /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/thrift
            [javac] Note: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java uses or overrides a deprecated API.
            [javac] Note: Recompile with -Xlint:deprecation for details.
            [javac] Note: Some input files use unchecked or unsafe operations.
            [javac] Note: Recompile with -Xlint:unchecked for details.
            [javac] Compiling 865 source files to /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/main
            [javac] Note: Some input files use or override a deprecated API.
            [javac] Note: Recompile with -Xlint:deprecation for details.
            [javac] Note: Some input files use unchecked or unsafe operations.
            [javac] Note: Recompile with -Xlint:unchecked for details.

        createVersionPropFile:
            [mkdir] Created dir: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/resources/org/apache/cassandra/config
        [propertyfile] Creating new property file: /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/src/resources/org/apache/cassandra/config/version.properties
             [copy] Copying 3 files to /scratch/ISS/shin14/repos/apache-cassandra-2.0.8-src-0713/build/classes/main

        build:

        BUILD SUCCESSFUL
        Total time: 32 seconds
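    One common cause of exactly this symptom, offered as a guess since it cannot be confirmed from the output above: Ant (or the JVM it forks) reads from standard input, and inside the here-document the remote /bin/bash is taking its remaining commands from that same stdin, so everything after 'ant clean build' gets consumed by Ant, and bash hits end-of-input as soon as the build finishes. The usual fix is to detach the long-running command from stdin, e.g.:

        # inside the here-document: give ant an empty stdin so it cannot swallow
        # the rest of the script that the remote bash still has to read
        ant clean build < /dev/null

    This would also explain why 'sleep 60' behaves differently, since sleep never reads stdin.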

    Read the article

  • Troubleshooting sudoers via ldap

    - by dafydd
    The good news is that I got sudoers via LDAP working on Red Hat Directory Server. The package is sudo-1.7.2p1. I have some LDAP/Kerberos users in an LDAP group called wheel, and I have this entry in LDAP:

        # %wheel, SUDOers, example.com
        dn: cn=%wheel,ou=SUDOers,dc=example,dc=com
        cn: %wheel
        description: Members of group wheel have access to all privileges.
        objectClass: sudoRole
        objectClass: top
        sudoCommand: ALL
        sudoHost: ALL
        sudoUser: %wheel

    So, members of group wheel have administrative privileges via sudo. This has been tested and works fine. Now, I have this other sudo privilege set up to allow members of a group called Administrators to perform two commands as the non-root owner of those commands:

        # %Administrators, SUDOers, example.com
        dn: cn=%Administrators,ou=SUDOers,dc=example,dc=com
        sudoRunAsGroup: appGroup
        sudoRunAsUser: appOwner
        cn: %Administrators
        description: Allow members of the group Administrators to run various commands.
        objectClass: sudoRole
        objectClass: top
        sudoCommand: appStop
        sudoCommand: appStart
        sudoCommand: /path/to/appStop
        sudoCommand: /path/to/appStart
        sudoUser: %Administrators

    Unfortunately, members of Administrators are still refused permission to run appStart or appStop:

        -bash-3.2$ sudo /path/to/appStop
        [sudo] password for Aaron:
        Sorry, user Aaron is not allowed to execute '/path/to/appStop' as root on host.example.com.
        -bash-3.2$ sudo -u appOwner /path/to/appStop
        [sudo] password for Aaron:
        Sorry, user Aaron is not allowed to execute '/path/to/appStop' as appOwner on host.example.com.

    /var/log/secure shows me these two sets of messages for the two attempts:

        Oct 31 15:02:36 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron
        Oct 31 15:02:37 host sudo: pam_krb5[1508]: TGT verified using key for 'host/[email protected]'
        Oct 31 15:02:37 host sudo: pam_krb5[1508]: authentication succeeds for 'Aaron' ([email protected])
        Oct 31 15:02:37 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=root ; COMMAND=/path/to/appStop
        Oct 31 15:02:52 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron
        Oct 31 15:02:52 host sudo: pam_krb5[1547]: TGT verified using key for 'host/[email protected]'
        Oct 31 15:02:52 host sudo: pam_krb5[1547]: authentication succeeds for 'Aaron' ([email protected])
        Oct 31 15:02:52 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=appOwner ; COMMAND=/path/to/appStop

    The questions:
    Does sudo have some sort of verbose or debug mode where I can actually watch it capture the sudoers privilege list and determine whether or not Aaron should have the privilege to run this command? (This question is probably independent of where the sudoers database is kept.)
    Does sudo work with some background mechanism that might have a log level I could turn up? Right now, I can't fix a problem I can't identify.
    Is this an LDAP search failure? Is this a group-member matching failure? Identifying why the command fails will help me identify the fix...
    Next step: recreate the privilege in /etc/sudoers and see if it works locally... Cheers!
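    On the debug question: sudo's LDAP backend does have a verbosity knob. Assuming this sudo 1.7.x build reads its LDAP settings from /etc/ldap.conf (the exact file can differ per build and distribution), setting sudoers_debug there makes sudo print the LDAP searches and matching decisions to stderr whenever a command is attempted — a sketch:

        # /etc/ldap.conf (or whichever ldap.conf your sudo build uses)
        # 1 = show sudoRole query results, 2 = also show option parsing and matching detail
        sudoers_debug 2

    With that enabled, running "sudo -l" as Aaron should show whether the %Administrators role is matched at all. If it is not, one common cause is that %group matching is done against the user's group membership as seen by the operating system (NSS), not against the LDAP group entry directly, so the rule only applies if "id Aaron" on that host actually lists Administrators.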

    Read the article

  • How to disable proxy requests once a server has been added to spammers "open proxy" list?

    - by Matt
    Hello all, I've just started at a new company and have been going over the setup of their Apache web server conf files... only to find that their Apache servers have been set up as open proxies, available to all the world, for the last two months. I've already set ProxyRequests Off in the httpd.conf file and restarted the web server, but the access log file is still growing at a horrendous rate (about a gig a day). I noticed that another question was posted on here about this (http://serverfault.com/questions/63715/apache-hit-with-proxy-request), but their access log was supposedly returning 404 errors, while mine appears to be returning 403 and 404 codes... Is this correct? Here are a few lines out of my access log:

        87.118.118.124 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.c5interlude.ru/torrent/viewtopic.php?p=2501 HTTP/1.0" 404 219 "http://www.c5interlude.ru/torrent/viewtopic.php?p=2501" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)"
        117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=300x250&section=790074 HTTP/1.0" 404 200 "http://www.newbiegamer.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Alexa Toolbar)"
        122.224.55.222 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1" 403 214 "http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar" "Mozilla/4.0"
        58.55.21.40 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.cpx24.com/ad1.js HTTP/1.0" 404 204 "http://thebighits.com/?id=aibux" "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)"
        122.226.223.188 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.reduxmedia.com/st?ad_type=iframe&ad_size=160x600&section=798636 HTTP/1.0" 404 200 "http://www.gvvu.com" "Mozilla/4.0 (compatible; MSIE 5.5; AOL 6.0; Windows 98; Win 9x 4.90)"
        84.51.109.31 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.kslp.ru/forum/index.php HTTP/1.0" 404 213 "http://www.kslp.ru/forum/index.php" "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0 ; .NET CLR 2.0.50215; SL Commerce Client v1.0; Tablet PC 2.0"
        122.224.48.49 - - [16/Mar/2010:10:56:36 -0400] "GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1" 403 214 "http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe" "Mozilla/4.0"
        117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=728x90&section=657624 HTTP/1.0" 404 200 "http://www.raiseanimals.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Alexa Toolbar)"

    And my corresponding error log entries:

        [Tue Mar 16 10:56:36 2010] [error] [client 87.118.118.124] File does not exist: C:/public_html/torrent, referer: http://www.c5interlude.ru/torrent/viewtopic.php?p=2501
        [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.newbiegamer.com
        [Tue Mar 16 10:56:36 2010] [error] [client 122.224.55.222] (22)Invalid argument: Cannot map GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1 to file, referer: http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar
        [Tue Mar 16 10:56:36 2010] [error] [client 58.55.21.40] File does not exist: C:/public_html/ad1.js, referer: http://thebighits.com/?id=aibux
        [Tue Mar 16 10:56:36 2010] [error] [client 122.226.223.188] File does not exist: C:/public_html/st, referer: http://www.gvvu.com
        [Tue Mar 16 10:56:36 2010] [error] [client 84.51.109.31] File does not exist: C:/public_html/forum, referer: http://www.kslp.ru/forum/index.php
        [Tue Mar 16 10:56:36 2010] [error] [client 122.224.48.49] (22)Invalid argument: Cannot map GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1 to file, referer: http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe
        [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.raiseanimals.com

    Does this in fact look like the server is blocking them correctly, and is there anything else that I could do better to cut down on my access log size? (Perhaps block these requests from the server completely?) Thanks! Matt
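    The 403/404 responses indicate Apache is no longer proxying — it is mapping those absolute URLs onto the local DocumentRoot and failing — so what remains is mostly log noise until the open-proxy lists drop the server. One way to refuse these requests outright and keep them out of the access log, assuming mod_rewrite and the standard log modules are loaded (the log path and format below are placeholders), is a sketch like this in the affected VirtualHost:

        RewriteEngine On
        # Proxy-style request lines look like "GET http://other-host/... HTTP/1.x";
        # refuse them and tag them so they can be excluded from the access log.
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ https?:// [NC]
        RewriteRule .* - [F,E=proxy_abuse:1]

        # Log only requests that were not tagged as proxy abuse.
        CustomLog logs/access_log combined env=!proxy_abuse

    Firewalling the noisiest client IPs is another option, but the abusers tend to churn, so refusing them cheaply at the web server is usually enough.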

    Read the article

  • amplified reflected attack on dns

    - by Mike Janson
    The term is new to me, so I have a few questions about it. I've heard it mostly happens with DNS servers? How do you protect against it? How do you know whether your servers can be abused in one? This is a configuration issue, right? My named.conf file:

        include "/etc/rndc.key";
        controls {
            inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; };
        };
        options {
            /* make named use port 53 for the source of all queries, to allow
             * firewalls to block all ports except 53:
             */
            // query-source port 53;
            /* We no longer enable this by default as the dns posion exploit has forced
               many providers to open up their firewalls a bit */
            // Put files that named is allowed to write in the data/ directory:
            directory "/var/named"; // the default
            pid-file "/var/run/named/named.pid";
            dump-file "data/cache_dump.db";
            statistics-file "data/named_stats.txt";
            /* memstatistics-file "data/named_mem_stats.txt"; */
            allow-transfer {"none";};
        };
        logging {
            /* If you want to enable debugging, eg. using the 'rndc trace' command,
             * named will try to write the 'named.run' file in the $directory (/var/named").
             * By default, SELinux policy does not allow named to modify the /var/named" directory,
             * so put the default debug log file in data/ :
             */
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
        view "localhost_resolver" {
            /* This view sets up named to be a localhost resolver ( caching only nameserver ).
             * If all you want is a caching-only nameserver, then you need only define this view:
             */
            match-clients { 127.0.0.0/24; };
            match-destinations { localhost; };
            recursion yes;
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            /* these are zones that contain definitions for all the localhost
             * names and addresses, as recommended in RFC1912 - these names should
             * ONLY be served to localhost clients:
             */
            include "/var/named/named.rfc1912.zones";
        };
        view "internal" {
            /* This view will contain zones you want to serve only to "internal" clients
               that connect via your directly attached LAN interfaces - "localnets" .
             */
            match-clients { localnets; };
            match-destinations { localnets; };
            recursion yes;
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            // include "/var/named/named.rfc1912.zones";
            // you should not serve your rfc1912 names to non-localhost clients.
            // These are your "authoritative" internal zones, and would probably
            // also be included in the "localhost_resolver" view above :
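    On the protection question, the usual first step for BIND is to make sure the server will not answer recursive queries — the expensive, easily amplified ones — for arbitrary Internet clients. A hedged sketch of the relevant directives, assuming a BIND 9 named.conf like the one above (the acl name and networks are placeholders for your own ranges):

        acl "trusted" { 127.0.0.0/8; 10.0.0.0/8; };     // networks allowed to recurse

        options {
            // ... existing options ...
            recursion yes;                  // recursion stays available...
            allow-recursion { trusted; };   // ...but only to the trusted networks
            allow-query-cache { trusted; }; // keep cached answers private as well
        };

    In a split-view setup like the one posted, the per-view match-clients already restricts who can recurse; the thing to verify is that no catch-all external view (or the bare options block) leaves recursion open to the world. Newer BIND releases (9.9.4 and later) also ship response-rate limiting (the rate-limit {} block), which throttles the repeated identical answers these reflection floods generate, if the packaged version supports it.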

    Read the article
