Search Results

Search found 43847 results on 1754 pages for 'command line arguments'.


  • How to import this data set into Excel? (column headings on each row delimited by a colon)

    - by Anonymous
    I'm trying to import the following data set into Excel. I've had no luck with the text import wizard. I'd like Excel to make id, name, street, etc. the column names and insert each record onto a new row.

      id: sdfg:435-345, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info

      id: sdfg:435-345f, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info

    Is there any easy way to do this with Excel? I'm struggling to think of a way to convert this to a conventional CSV easily. As far as I can tell, I'd have to remove the labels from each line, enclose each value in quotes, then delimit them with commas. That's made a little more difficult to script, though, because some fields (the address, for instance) contain comma-delimited data. I'm not good with regex at all. What's the best way to tackle this?
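
    One workable way to script the conversion (an illustrative C# sketch, not taken from an answer; the file names and the label list are assumptions based on the sample above) is to split on the known field labels rather than on commas, so comma-containing values such as the street survive intact:

      // Split the raw export on the known labels, then emit one quoted CSV row per record.
      using System;
      using System.IO;
      using System.Linq;
      using System.Text.RegularExpressions;

      class LabelledRecordsToCsv
      {
          static readonly string[] Labels =
          {
              "id", "name", "type", "street", "postalcode", "city", "telephoneNumber",
              "mobileNumber", "faxNumber", "url", "email", "remark", "geocode", "category"
          };

          static void Main()
          {
              string raw = File.ReadAllText("input.txt");                  // hypothetical input file
              var labelPattern = new Regex(@"\b(" + string.Join("|", Labels) + "):");
              var matches = labelPattern.Matches(raw).Cast<Match>().ToList();

              using (var writer = new StreamWriter("output.csv"))
              {
                  writer.WriteLine(string.Join(",", Labels));              // header row
                  string[] current = null;
                  for (int i = 0; i < matches.Count; i++)
                  {
                      string label = matches[i].Groups[1].Value;
                      int valueStart = matches[i].Index + matches[i].Length;
                      int valueEnd = (i + 1 < matches.Count) ? matches[i + 1].Index : raw.Length;
                      string value = raw.Substring(valueStart, valueEnd - valueStart).Trim().TrimEnd(',').Trim();

                      if (label == "id")                                   // "id" marks the start of a new record
                      {
                          if (current != null) writer.WriteLine(string.Join(",", current));
                          current = new string[Labels.Length];
                      }
                      if (current != null)                                 // quote every value so embedded commas are safe
                          current[Array.IndexOf(Labels, label)] = "\"" + value.Replace("\"", "\"\"") + "\"";
                  }
                  if (current != null) writer.WriteLine(string.Join(",", current));
              }
          }
      }

    The resulting output.csv can then be opened directly in Excel, which treats the quoted, comma-separated values as one column per label.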

    Read the article

  • Scripting around the lack of user:password@domain url functionality in jscript/IE

    - by Idiomatic
    I currently have a JScript that runs a PHP script on a server for me, dead simple. But I want to be at least somewhat secure, so I set up a login. If I use the regular user:password@domain form it won't work (IE decided it was a security issue), and if I let IE just remember the password it pops up a security message confirming my login every time (which kills the point of the button). So I need a way to make the security message go away. My options seem to be:

      1. Lower the security settings, which to be honest I am fine with, but nothing seems to make the prompt go away (there might be some registry setting to change).
      2. Find a fix for JScript that will let me use a password in the URL. There used to be a registry edit that worked on older systems to allow IE to use URL passwords (it does not work on my 64-bit Windows 7 setup), though I doubt that would have helped JScript anyway (since it outright crashes).
      3. Use an app other than IE. In which case I'm not sure how to go about it; I want it to be responsive and invisible, so IE was a good choice, as it is near instant.
      4. Use XMLHttpRequest instead of IE directly? It may even be faster, but I've no idea if it would help or just hit the same error.
      5. Use a completely different approach, maybe some app that can script website browsing.

    The current script:

      var args = {};
      var objIEA = new ActiveXObject("InternetExplorer.Application");
      if( WScript.Arguments.Item(0) == "pause" ){ objIEA.navigate("http://domain/index.html?pause"); }
      if( WScript.Arguments.Item(0) == "next" ){ objIEA.navigate("http://domain/index.html?next"); }
      objIEA.visible = false;
      while(objIEA.readyState != 4) {}
      objIEA.quit();
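
    As an aside on the "use an app other than IE" option (an illustrative C# sketch, not from the original question; the URL, user name and password are placeholders): a small console program can fire the same request with HTTP credentials and no visible browser at all.

      // Sends the "pause" or "next" request directly, with credentials, instead of driving IE.
      using System;
      using System.Net;

      class Trigger
      {
          static void Main(string[] args)
          {
              string action = args.Length > 0 ? args[0] : "pause";                  // "pause" or "next"
              using (var client = new WebClient())
              {
                  client.Credentials = new NetworkCredential("user", "password");   // placeholder login
                  client.DownloadString("http://domain/index.html?" + action);      // placeholder URL
              }
          }
      }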

    Read the article

  • Why does just splitting an Ethernet cable not work?

    - by Sin Jeong-hun
    I thought Ethernet was logically a one-line communication bus (for argument's sake, I am excluding hubs). All machines attached to the bus hear the same signals, and the machines themselves try to avoid collisions by randomly backing off: http://computer.howstuffworks.com/ethernet6.htm. If so, why would splitting one Ethernet line from my home router into two and connecting two computers not work? Why do I have to add a switch?

      * What the Internet said would not work:
        [4 port home router] ------[one Ethernet cable]-----[simple splitter]======[two computers]
      * What the Internet said I should do:
        [4 port home router] ------[one Ethernet cable]-----[switch]======[two computers]

    Is this because of signal degradation (reduced electric current)? Thank you for all the answers! The reason I did not just use two ports of my home router: the 4-port gigabit router is in my room, and I had put a computer in another room (also my room, though). Since a wired network is far more reliable and secure, I bought a long Ethernet cable and connected that computer to the router. Now I am thinking about adding another computer to that room. I could buy another long Ethernet cable, but then there would be two cables between the rooms. The one line is already a minor annoyance, so I wondered if I could share it between the two computers in that room. A switch would work, but it requires power and is a little bit pricey. That is why I wondered why it would not work to simply split the physical Ethernet cable. Apparently I do not completely understand how Ethernet and a switch work; I just have a bit of knowledge from my college class.

    Read the article

  • java warnings on linux

    - by Geo Papas
    Hello, I am getting warnings after installing Java on Kubuntu 11.10. The Java programs run, but I always get 4 warnings:

      $ java
      Warning: no leading - on line 1 of `/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg'
      Warning: missing VM type on line 1 of `/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg'
      Warning: no leading - on line 1 of `/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg'
      Warning: missing VM type on line 1 of `/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg'

    What am I missing? Thanks in advance! Here is the content of /usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg:

      /usr/lib/jvm/java-6-sun
      #
      # %W% %E%
      #
      # Copyright (c) 2006, Oracle and/or its affiliates. All rights reserved.
      # ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
      #
      # List of JVMs that can be used as an option to java, javac, etc.
      # Order is important -- first in this list is the default JVM.
      # NOTE that both this file and its format are UNSUPPORTED and
      # WILL GO AWAY in a future release.
      #
      # You may also select a JVM in an arbitrary location with the
      # "-XXaltjvm=<jvm_dir>" option, but that too is unsupported
      # and may not be available in a future release.
      #
      -server KNOWN
      -client IGNORE
      -hotspot ERROR
      -classic WARN
      -native ERROR
      -green ERROR

    Read the article

  • cisco asa + action drop issue

    - by ghp
    Have created a tunnel between 10.x.y.z network and 122.a.b.c ..the tunnel is up and active, but when I try the packet tracer output ..I get the ACTION as drop. I have also enabled same-security-traffic permit intra-interface. Can someone help me what does this drop mean? Result: input-interface: inside input-status: up input-line-status: up output-interface: outside output-status: up output-line-status: up Action: drop Drop-reason: (acl-drop) Flow is denied by configured rule Packet Tracer output @Shane Madden: please find below the packet tracer output. CASA5K-A# CASA5K-A# config t CASA5K-A(config)# packet-tracer input inside tcp 10.x.y.112 0 122.a.b.c 0 Phase: 1 Type: ROUTE-LOOKUP Subtype: input Result: ALLOW Config: Additional Information: in 0.0.0.0 0.0.0.0 outside Phase: 2 Type: ACCESS-LIST Subtype: Result: DROP Config: Implicit Rule Additional Information: Result: input-interface: inside input-status: up input-line-status: up output-interface: outside output-status: up output-line-status: up Action: drop Drop-reason: (acl-drop) Flow is denied by configured rule CASA5K-A(config)# ======================================================================== The access-group are as follows : access-group acl-inbound in interface outside access-group acl-outbound in interface inside and the access-list's are access-list acl-inbound extended permit tcp any any gt 1023 access-list acl-outbound extended permit ip object-group net-Source object net-dest

    Read the article

  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS. One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member that starts work when most of the Sydney workers are heading home as I can do the upgrade without impacting them. The down side is that if you have any blockers then you can be pretty sure that everyone that can deal with your problem is asleep I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one that did the install so I thought it prudent to make sure that it was OK. Verifying Team Foundation Server 2010 We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors. Figure: Even the SQL Setup looks good. I don’t know how Adam did it! Backing up the Team Foundation Server 2008 Databases As we are moving from one server to another (recommended method) we will be taking a backup of our TFS2008 databases and resorting them to the SQL Server for the new TFS2010 Server. Do not just detach and reattach. This will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you backup the databases. This will stop any inadvertent check-ins or changes to TFS 2008. Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database. It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade as there is always one developer who has not finished and starts screaming. This time it was John Liu that needed 10 more minutes to make his changes and check-in, so I always give it 30 minutes and see if anyone screams. John Liu [SSW] said:   are you doing something to TFS :-O MrHinsh [SSW UK][VS ALM MVP] said:   I have stopped TFS 2008 as per my emails John Liu [SSW] said:   haven't finish check in @_@   can we have it for 10mins? :) MrHinsh [SSW UK][VS ALM MVP] said:   TFS 2008 has been started John Liu [SSW] said:   I love you! -IM conversation at TFS Upgrade +25 minutes After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams so the upgrade can continue. Figure: Backup all of the databases for TFS and include the Reporting Services, just in case.   Figure: Check that all the backups have been taken Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed as if we have any problems, or just plain run out of time, then you just turn the TFS 2008 server back on and all you have lost is one upgrade day, and not 10 developer days. As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to: TFS2008 File count: Type Count 1 1845 2 15770 Areas & Iterations: 139 You can use this to verify that the upgrade was successful. it should however be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process. 
Restore Team Foundation Server 2008 Databases Restoring the databases is much more time consuming than just attaching them as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems. Figure: Restore each of the databases to either a latest or specific point in time.     Figure: Restore all of the required databases Now that all of your databases are restored you now need to upgrade them to Team Foundation Server 2010. Upgrade Team Foundation Server 2008 Databases This is probably the easiest part of the process. You need to call a fire and forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all of the 6 main TFS 2008 databases are merged into the TfsVersionControl database, upgraded and then the database is renamed to TFS_[CollectionName]. The rename is only the database and not the physical files, so it is worth going back and renaming the physical file as well. This keeps everything neat and tidy. If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. This GUID is used by the clients and they can get a little confused if there are two servers with the same one. To kick of the upgrade you need to open a command prompt and change the path to “C:\Program Files\Microsoft Team Foundation Server 2010\Tools” and run the “import” command in  “tfsconfig”. TfsConfig import /sqlinstance:<Previous TFS Data Tier>                  /collectionName:<Collection Name>                  /confirmed Imports a TFS 2005 or 2008 data tier as a new project collection. Important: This command should only be executed after adequate backups have been performed. After you import, you will need to configure portal and reporting settings via the administration console. EXAMPLES -------- TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed OPTIONS: -------- sqlinstance         The sql instance of the TFS 2005 or 2008 data tier. The TFS databases at that location will be modified directly and will no longer be usable as previous version databases.  Ensure you have back-ups. collectionName      The name of the new Team Project Collection. confirmed           Confirm that you have backed-up databases before importing. This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade as the total database size was under 700MB. This was unlike the upgrade of SSW’s production database with over 17GB of data which took a few hours. At the end of the process you should get no errors and no warnings. The Upgrade operation on the ApplicationTier feature has completed. There were 0 errors and 0 warnings. As this is a new server and not a pure upgrade there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection. 
This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files. Then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this. Figure: Stop the collection so TFS does not take a wobbly when we detach the database. When you try to start the new collection again you will get a conflict with project names and will require to remove the Test Upgrade collection. This is fine and it just needs detached. Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new Collection again. You will now be able to start the new upgraded collection and you are ready for testing. Do you remember the stats we took off the TFS 2008 server? TFS2008 File count: Type Count 1 1845 2 15770 Areas & Iterations: 139 Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control. TFS2010 File count: Type Count 1 19288 Areas & Iterations: 139 Lovely, the number of iterations are the same, and the number of files is bigger. Just what we were looking for. Testing the upgraded Team Foundation Server 2010 Project Collection Can we connect to the new collection and project? Figure: We can connect to the new collection and project.   Figure: make sure you can connect to The upgraded projects and that you can see all of the files. Figure: Team Web Access is there and working. Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case as I am running on the local box you need to use http://localhost:8080/tfs which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname] and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection and in both VS2008 and VS2005 you will need to install the forward compatibility updates. Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 To make sure that you have everything up to date, make sure that you run SSW Diagnostics and get all green ticks. Upgrade Done! At this point you can send out a notice to everyone that the upgrade is complete and and give them the connection details. You need to remember that at this stage we have 2008 project upgraded to run under TFS 2010 but it is still running under that same process template that it was running before. You can only “enable” 2010 features in a process template you can’t upgrade. So what to do? Well, you need to create a new project and migrate things you want to keep across. Souse code is easy, you can move or Branch, but Work Items are more difficult as you can’t move them between projects. This instance is complicated more as the old project uses the Conchango/EMC Scrum for Team System template and I will need to write a script/application to get the work items across with their attachments in tact. That is my next task! Technorati Tags: TFS 2010,TFS 2008,VS ALM

    Read the article

  • How to reload sarg configuration file

    - by black sensei
    I installed sarg a while ago on Ubuntu 12.04 and forgot that I wanted it to generate reports inside /var/www/vhosts/reports.lan/htdocs. I found that out this morning and made the change in /etc/sarg/sarg.conf. I then manually ran sarg with # sarg-reports today, but it still generated the report in the old folder /var/lib/sarg. I know I could create a symbolic link, but I was surprised that I could not find a single command to reload or at least restart sarg. Can anyone give me the command to restart or reload it? Thanks

    Read the article

  • nvcc not found, but only when using sudo

    - by dsp_099
    I can't get ANYTHING working on linux. I'm trying to compile CudaMiner. sudo make:

      ypt-jane.o `test -f 'scrypt-jane.cpp' || echo './'`scrypt-jane.cpp
      mv -f .deps/cudaminer-scrypt-jane.Tpo .deps/cudaminer-scrypt-jane.Po
      nvcc -g -O2 -Xptxas "-abi=no -v" -arch=compute_10 --maxrregcount=64 --ptxas-options=-v -I./compat/jansson -o salsa_kernel.o -c salsa_kernel.cu
      /bin/bash: nvcc: command not found
      make[2]: *** [salsa_kernel.o] Error 127
      make[2]: Leaving directory `/var/progs/CudaMiner'
      make[1]: *** [all-recursive] Error 1
      make[1]: Leaving directory `/var/progs/CudaMiner'
      make: *** [all] Error 2

    So, kind of interesting: nvcc:

      nvcc fatal : No input files specified; use option --help for more information

    Whereas sudo nvcc:

      sudo: nvcc: command not found

    Huh?? I have identical exports listed in ~/.bashrc AND /etc/bash.bashrc. (Nvcc is located in: /usr/local/cuda-5.0/bin/nvcc) I also tried changing the current path, to no avail:

      $ sudo bash -c 'echo $PATH'
      /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      $ PATH=$PATH:/usr/local/cuda-5.0/bin/nvcc
      $ sudo bash -c 'echo $PATH'
      /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    Thanks in advance!

    Read the article

  • Create and Consume WCF service using Visual Studio 2010

    - by sreejukg
    In this article I am going to demonstrate how to create a WCF service, that can be hosted inside IIS and a windows application that consume the WCF service. To support service oriented architecture, Microsoft developed the programming model named Windows Communication Foundation (WCF). ASMX was the prior version from Microsoft, was completely based on XML and .Net framework continues to support ASMX web services in future versions also. While ASMX web services was the first step towards the service oriented architecture, Microsoft has made a big step forward by introducing WCF. An overview of planning for WCF can be found from this link http://msdn.microsoft.com/en-us/library/ff649584.aspx . The following are the important differences between WCF and ASMX from an asp.net developer point of view. 1. ASMX web services are easy to write, configure and consume 2. ASMX web services are only hosted in IIS 3. ASMX web services can only use http 4. WCF, can be hosted inside IIS, windows service, console application, WAS(Windows Process Activation Service) etc 5. WCF can be used with HTTP, TCP/IP, MSMQ and other protocols. The detailed difference between ASMX web service and WCF can be found here. http://msdn.microsoft.com/en-us/library/cc304771.aspx Though WCF is a bigger step for future, Visual Studio makes it simpler to create, publish and consume the WCF service. In this demonstration, I am going to create a service named SayHello that accepts 2 parameters such as name and language code. The service will return a hello to user name that corresponds to the language. So the proposed service usage is as follows. Caller: SayHello(“Sreeju”, “en”) -> return value -> Hello Sreeju Caller: SayHello(“???”, “ar”) -> return value -> ????? ??? Caller: SayHello(“Sreeju”, “es”) - > return value -> Hola Sreeju Note: calling an automated translation service is not the intention of this article. If you are interested, you can find bing translator API and can use in your application. http://www.microsofttranslator.com/dev/ So Let us start First I am going to create a Service Application that offer the SayHello Service. Open Visual Studio 2010, Go to File -> New Project, from your preferred language from the templates section select WCF, select WCF service application as the project type, give the project a name(I named it as HelloService), click ok so that visual studio will create the project for you. In this demonstration, I have used C# as the programming language. Visual studio will create the necessary files for you to start with. By default it will create a service with name Service1.svc and there will be an interface named IService.cs. The screenshot for the project in solution explorer is as follows Since I want to demonstrate how to create new service, I deleted Service1.Svc and IService1.cs files from the project by right click the file and select delete. Now in the project there is no service available, I am going to create one. From the solution explorer, right click the project, select Add -> New Item Add new item dialog will appear to you. Select WCF service from the list, give the name as HelloService.svc, and click on the Add button. Now Visual studio will create 2 files with name IHelloService.cs and HelloService.svc. These files are basically the service definition (IHelloService.cs) and the service implementation (HelloService.svc). Let us examine the IHelloService interface. 
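
    (The screenshot of the generated file does not survive in this text-only excerpt; the Visual Studio template produces a contract roughly like the following sketch, with the namespace and boilerplate comments omitted.)

      using System.ServiceModel;

      [ServiceContract]
      public interface IHelloService
      {
          [OperationContract]
          void DoWork();
      }
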
The code state that IHelloService is the service definition and it provides an operation/method (similar to web method in ASMX web services) named DoWork(). Any WCF service will have a definition file as an Interface that defines the service. Let us see what is inside HelloService.svc The code illustrated is implementing the interface IHelloService. The code is self-explanatory; the HelloService class needs to implement all the methods defined in the Service Definition. Let me do the service as I require. Open IHelloService.cs in visual studio, and delete the DoWork() method and add a definition for SayHello(), do not forget to add OperationContract attribute to the method. The modified IHelloService.cs will look as follows Now implement the SayHello method in the HelloService.svc.cs file. Here I wrote the code for SayHello method as follows. I am done with the service. Now you can build and run the service by clicking f5 (or selecting start debugging from the debug menu). Visual studio will host the service in give you a client to test it. The screenshot is as follows. In the left pane, it shows the services available in the server and in right side you can invoke the service. To test the service sayHello, double click on it from the above window. It will ask you to enter the parameters and click on the invoke button. See a sample output below. Now I have done with the service. The next step is to write a service client. Creating a consumer application involves 2 steps. One generating the class and configuration file corresponds to the service. Create a project that utilizes the generated class and configuration file. First I am going to generate the class and configuration file. There is a great tool available with Visual Studio named svcutil.exe, this tool will create the necessary class and configuration files for you. Read the documentation for the svcutil.exe here http://msdn.microsoft.com/en-us/library/aa347733.aspx . Open Visual studio command prompt, you can find it under Start Menu -> All Programs -> Visual Studio 2010 -> Visual Studio Tools -> Visual Studio command prompt Make sure the service is in running state in visual studio. Note the url for the service(from the running window, you can right click and choose copy address). Now from the command prompt, enter the svcutil.exe command as follows. I have mentioned the url and the /d switch – for the directory to store the output files(In this case d:\temp). If you are using windows drive(in my case it is c: ) , make sure you open the command prompt with run as administrator option, otherwise you will get permission error(Only in windows 7 or windows vista). The tool has created 2 files, HelloService.cs and output.config. Now the next step is to create a new project and use the created files and consume the service. Let us do that now. I am going to add a console application to the current solution. Right click solution name in the solution explorer, right click, Add-> New Project Under Visual C#, select console application, give the project a name, I named it TestService Now navigate to d:\temp where I generated the files with the svcutil.exe. Rename output.config to app.config. Next step is to add both files (d:\temp\helloservice.cs and app.config) to the files. In the solution explorer, right click the project, Add -> Add existing item, browse to the d:\temp folder, select the 2 files as mentioned before, click on the add button. Now you need to add a reference to the System.ServiceModel to the project. 
From solution explorer, right click the references under testservice project, select Add reference. In the Add reference dialog, select the .Net tab, select System.ServiceModel, and click ok Now open program.cs by double clicking on it and add the code to consume the web service to the main method. The modified file looks as follows Right click the testservice project and set as startup project. Click f5 to run the project. See the sample output as follows Publishing WCF service under IIS is similar to publishing ASP.Net application. Publish the application to a folder using Visual studio publishing feature, create a virtual directory and create it as an application. Don’t forget to set the application pool to use ASP.Net version 4. One last thing you need to check is the app.config file you have added to the solution. See the element client under ServiceModel element. There is an endpoint element with address attribute that points to the published service URL. If you permanently host the service under IIS, you can simply change the address parameter to the corresponding one and your application will consume the service. You have seen how easily you can build/consume WCF service. If you need the solution in zipped format, please post your email below.
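
    The screenshots of the finished SayHello contract, its implementation and the modified Program.cs do not survive in this text-only excerpt. The sketch below is a rough reconstruction of what the article describes; the greeting strings and the proxy class name (HelloServiceClient is the usual svcutil naming convention) are assumptions rather than code copied from the original post.

      // Service contract and implementation described in the article (reconstructed).
      using System;
      using System.ServiceModel;

      [ServiceContract]
      public interface IHelloService
      {
          [OperationContract]
          string SayHello(string name, string languageCode);
      }

      public class HelloService : IHelloService
      {
          public string SayHello(string name, string languageCode)
          {
              switch (languageCode)
              {
                  case "es": return "Hola " + name;
                  case "ar": return "مرحبا " + name;   // placeholder greeting
                  default:   return "Hello " + name;
              }
          }
      }

      // Console client (TestService) using the svcutil-generated proxy and the endpoint from app.config.
      class Program
      {
          static void Main()
          {
              var client = new HelloServiceClient();   // class generated by svcutil.exe
              Console.WriteLine(client.SayHello("Sreeju", "en"));
              Console.WriteLine(client.SayHello("Sreeju", "es"));
              client.Close();
          }
      }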

    Read the article

  • Google Chrome proxy settings?

    - by becko
    When I try to set Google Chrome's proxy settings (on chrome://linux-proxy-config/), I get the following message: When running Google Chrome under a supported desktop environment, the system proxy settings will be used. However, either your system is not supported or there was a problem launching your system configuration. But you can still configure via the command line. Please see man google-chrome-stable for more information on flags and environment variables. I need to set proxy settings to use Chrome, but I don't want to be setting them in the command line every time I use Chrome. Is there a way to set these settings permanently? Also, is there an option in Chrome so that it doesn't use proxy for specific domains (analogous to the No proxy for setting in Firefox)?

    Read the article

  • How to create a slipstreamed SharePoint Server 2010 SP1 and August Cumulative Update install

    - by ybbest
    When installing SharePoint 2010, you normally need to install the base product and then install each Service Pack and cumulative update. Fortunately, there is an easy way to install the base and all the updates at once; it is normally called a slipstream installation. You need to follow the steps below.

      1. Open the command prompt and extract the update package using a command like the one below (the /extract: switch takes the target folder).
         office2010-kb2553048-fullfile-x64-glb.exe /extract:".\SP2010 Aug Update"
      2. Do the same for SP1 and the August cumulative update.
      3. Next, copy all the extracted update files to the Updates folder under the base install.
      4. Now you are ready to install SharePoint 2010; just run PrerequisiteInstaller to install the prerequisite files.
      5. Finally, run setup.exe to start the installation.

    References: SharePoint Server 2010 SP1, SharePoint Foundation SP1, Service Pack 1 for SharePoint 2010 Products is Now Available for Download, SharePoint Patching and “Action Required” Updates for SharePoint 2010 Products

    Read the article

  • SQL SERVER – Check the Isolation Level with DBCC useroptions

    - by pinaldave
    In a recent consultancy project, the coordinator asked me: “Can you tell me what the isolation level is for this database?” I have worked with different isolation levels but had never queried the database for that information. I quickly looked it up in Books Online and found the DBCC command which gives me those details. You can run the DBCC USEROPTIONS command on any database to get a few details about dateformat and datefirst as well as the isolation level.

      DBCC useroptions

      Set Option                  Value
      --------------------------- --------------
      textsize                    2147483647
      language                    us_english
      dateformat                  mdy
      datefirst                   7
      lock_timeout                -1
      quoted_identifier           SET
      arithabort                  SET
      ansi_null_dflt_on           SET
      ansi_warnings               SET
      ansi_padding                SET
      ansi_nulls                  SET
      concat_null_yields_null     SET
      isolation level             read committed

    I thought this was a very handy script, which I had not used before. Thanks Gary for asking the right question. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology Tagged: Transaction Isolation
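
    As a side note not from the original post: if you ever need the same check from application code, a minimal ADO.NET sketch (the connection string is a placeholder) looks like this.

      // Runs DBCC USEROPTIONS and prints each "Set Option" / "Value" pair,
      // including the "isolation level" row discussed above.
      using System;
      using System.Data.SqlClient;

      class CheckIsolationLevel
      {
          static void Main()
          {
              using (var connection = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
              using (var command = new SqlCommand("DBCC USEROPTIONS", connection))
              {
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          Console.WriteLine("{0} = {1}", reader.GetValue(0), reader.GetValue(1));
                      }
                  }
              }
          }
      }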

    Read the article

  • Firefox not detecting Flash 11

    - by user34103
    I installed the Flash 11 plugin using the Software Center (and have also removed and reinstalled it via the command line in the terminal), yet Firefox still claims the latest version of the plugin I have is 10. (And just to clarify, I have been sure to restart both Firefox and the entire computer after installing.) On further investigation (this may be a red herring, pardon) I ran the uname -a command in the terminal to check that I was running the 64-bit version of Ubuntu, and received this output:

      3.0.0-13-generic #22-Ubuntu SMP Wed Nov 2 13:25:36 UTC 2011 i686 i686 i386 GNU/Linux

    I don't understand the series "i686 i686 i386". Which applies to my version of Ubuntu? Does this mean I've accidentally installed 32-bit Ubuntu? Very much a beginner here; I've combed the threads but have so little understanding of what my exact issue is that I haven't been able to find an answer.

    Read the article

  • error while running ruby application at system startup in ubuntu

    - by anjo
    I am on Ubuntu 12.04 machine. Have a script file which runs when entered manually in terminal gnome-terminal -e /home/precise/Desktop/cartodb/script.sh The content of script file is cd /home/ubuntupc/Desktop/cartodb20/ sh /home/ubuntupc/.rvm/scripts/rvm bundle exec foreman start -p 3000 So what i tried to do is to run this script at every system start up. So on Startup Applications command: gnome-terminal -e /home/precise/Desktop/cartodb/script.sh On terminal Edit - Profile Preferences - Title and Command Checked the "Run command as a login shell" But this seems to be not working. When restarted the machine found these error in terminal The child process exited normally with status 127. ERROR: RVM Ruby not used, run `rvm use ruby` first. Some info regarding the installed packages and system. $ which ruby /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/bin/ruby $ which rails /home/ubuntupc/.rvm/gems/ruby-1.9.2-p320/bin/rails $ which gem /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/bin/gem $ cat ~/.bash_profile [[ -s "$HOME/.profile" ]] && source "$HOME/.profile" # Load the default .profile [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function* $ which -a ruby /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/bin/ruby $ sudo update-alternatives --config ruby update-alternatives: error: no alternatives for ruby. $ sudo find / -name "rubygems" -print /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/site_ruby/1.9.1/rubygems /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/rubygems /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/lib/rubygems /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/test/rubygems /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/test/rubygems/rubygems /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/doc/rubygems /home/ubuntupc/.rvm/src/rubygems-2.2.1/lib/rubygems /home/ubuntupc/.rvm/src/rubygems-2.2.1/test/rubygems /home/ubuntupc/.rvm/src/rubygems-2.2.1/test/rubygems/rubygems /home/ubuntupc/.rvm/src/rvm/scripts/functions/rubygems /home/ubuntupc/.rvm/src/rvm/scripts/rubygems /home/ubuntupc/.rvm/scripts/functions/rubygems /home/ubuntupc/.rvm/scripts/rubygems /usr/lib/ruby/1.9.1/rubygems /usr/local/rvm/rubies/ruby-1.9.2-p320/lib/ruby/site_ruby/1.9.1/rubygems /usr/local/rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/rubygems /usr/local/rvm/src/ruby-1.9.2-p320/lib/rubygems /usr/local/rvm/src/ruby-1.9.2-p320/test/rubygems /usr/local/rvm/src/ruby-1.9.2-p320/test/rubygems/rubygems /usr/local/rvm/src/ruby-1.9.2-p320/doc/rubygems /usr/local/rvm/src/rubygems-2.2.0/lib/rubygems /usr/local/rvm/src/rubygems-2.2.0/test/rubygems /usr/local/rvm/src/rubygems-2.2.0/test/rubygems/rubygems /usr/local/rvm/src/rvm/scripts/functions/rubygems /usr/local/rvm/src/rvm/scripts/rubygems /usr/local/rvm/scripts/functions/rubygems /usr/local/rvm/scripts/rubygems Please point out what i am missing as i am new to the ruby applications. Thanks in advance

    Read the article

  • ASP.NET MVC 3: Razor’s @: and <text> syntax

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (today) In today’s post I’m going to discuss two useful syntactical features of the new Razor view-engine – the @: and <text> syntax support. Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post.  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a list of products: When run, it generates output like:   One of the techniques that Razor uses to implicitly identify when a code block ends is to look for tag/element content to denote the beginning of a content region.  For example, in the code snippet above Razor automatically treated the inner <li></li> block within our foreach loop as an HTML content block because it saw the opening <li> tag sequence and knew that it couldn’t be valid C#.  This particular technique – using tags to identify content blocks within code – is one of the key ingredients that makes Razor so clean and productive with scenarios involving HTML creation. Using @: to explicitly indicate the start of content Not all content container blocks start with a tag element tag, though, and there are scenarios where the Razor parser can’t implicitly detect a content block. Razor addresses this by enabling you to explicitly indicate the beginning of a line of content by using the @: character sequence within a code block.  The @: sequence indicates that the line of content that follows should be treated as a content block: As a more practical example, the below snippet demonstrates how we could output a “(Out of Stock!)” message next to our product name if the product is out of stock: Because I am not wrapping the (Out of Stock!) message in an HTML tag element, Razor can’t implicitly determine that the content within the @if block is the start of a content block.  We are using the @: character sequence to explicitly indicate that this line within our code block should be treated as content. Using Code Nuggets within @: content blocks In addition to outputting static content, you can also have code nuggets embedded within a content block that is initiated using a @: character sequence.  For example, we have two @: sequences in the code snippet below: Notice how within the second @: sequence we are emitting the number of units left within the content block (e.g. - “(Only 3 left!”). We are doing this by embedding a @p.UnitsInStock code nugget within the line of content. Multiple Lines of Content Razor makes it easy to have multiple lines of content wrapped in an HTML element.  
For example, below the inner content of our @if container is wrapped in an HTML <p> element – which will cause Razor to treat it as content: For scenarios where the multiple lines of content are not wrapped by an outer HTML element, you can use multiple @: sequences: Alternatively, Razor also allows you to use a <text> element to explicitly identify content: The <text> tag is an element that is treated specially by Razor. It causes Razor to interpret the inner contents of the <text> block as content, and to not render the containing <text> tag element (meaning only the inner contents of the <text> element will be rendered – the tag itself will not).  This makes it convenient when you want to render multi-line content blocks that are not wrapped by an HTML element.  The <text> element can also optionally be used to denote single-lines of content, if you prefer it to the more concise @: sequence: The above code will render the same output as the @: version we looked at earlier.  Razor will automatically omit the <text> wrapping element from the output and just render the content within it.  Summary Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s smart detection of <tag> elements to identify the beginning of content regions is one of the reasons that the Razor approach works so well with HTML generation scenarios, and it enables you to avoid having to explicitly mark the beginning/ending of content regions in about 95% of if/else and foreach scenarios. Razor’s @: and <text> syntax can then be used for scenarios where you want to avoid using an HTML element within a code container block, and need to more explicitly denote a content region. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
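
    The inline code screenshots referenced throughout this post do not survive in this text-only excerpt. As a rough, illustrative reconstruction (the Product properties and the ShowDetails flag are assumptions, not the original snippets), the patterns discussed above look something like this:

      @* Tag-delimited content, the @: sequence, and a code nugget inside an @: line *@
      <ul>
          @foreach (var p in Model.Products) {
              <li>
                  @p.Name (@p.Price)
                  @if (p.UnitsInStock == 0) {
                      @:(Out of Stock!)
                  }
                  @if (p.UnitsInStock > 0 && p.UnitsInStock < 5) {
                      @:(Only @p.UnitsInStock left!)
                  }
              </li>
          }
      </ul>

      @* Multiple lines of content with no wrapping HTML element: the <text> tag itself is not rendered *@
      @if (Model.ShowDetails) {
          <text>
              Multiple lines of plain content,
              not wrapped in an HTML element.
          </text>
      }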

    Read the article

  • Chester Devs Presentation and source code &ndash; &lsquo;Event Store - an introduction to a DSD for event sourcing and notifications&rsquo;

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/11/11/chester-devs-presentation-and-source-code-ndash-lsquoevent-store.aspxThank you everyone at Chester Devs Thanks to Fran Hoey and all the people from Chester Devs. It was a hard drive up and back but the enthusiasm of the audience, with some great questions does make it worthwhile. Presentation and source code My presentation, source code, Event Store runners and text files containing the various command line parameters used for curl is now available on GitHub; https://github.com/westleyl/ChesterDevs-EventStore. Don’t worry if you don’t have a GitHub account, you don’t need one, you can just click on the Download Zip button on the right hand menu to download all the files as a single ZIP file.  If all you want is the PowerPoint presentation, go to https://github.com/westleyl/ChesterDevs-EventStore/blob/master/Powerpoint/Huddle-EventStore.pptx, and click on the View Raw button. Downloading and installing Event Store and Tools Download Event Store http://download.geteventstore.com – I unzipped these files into C:\EventStore\v2.0.1 Download Curl from http://curl.haxx.se/download.html – I downloaded Win64 Generic (with SSL) and unzipped these files into C:\curl version 7.31.0 Running the tools I used in my presentation Demonstration 1 (running Event Store) You can use one of my Event Store runner command files to run the single node version of Event Store, using default ports of 2213 for HTTP and 1113  for TCP, and with a wildcard HTTP pattern.  Both take a single command line parameter to specify the location of the data and log files.  The runners assume the single node executable is located in C:\EventStore\v2.0.1, and will placed data files and logs beneath C:\EventStore\Data, i.e. RunEventStore.cmd TestData1 This will create data files in C:\EventStore\Data\TestData1\Data and log files in C:\EventStore\Data\TestData1\logs. If, when running Event Store you may see the following message, [03288,15,06:23:00.622] Failed to start http server Access is denied You will either need to run Event Store in an administrator console window, or you can use the netsh command to create a firewall permission to allow HTTP listening (this will need to be run, once, in an administrator console window), netsh http add urlacl url=http://*:2213/ user=liam You can always delete this later by running the delete; netsh http delete urlacl url=http://*:2213/ If you want to confirm that everything is running OK, open the management console in a browser by navigating to http://127.0.0.1:2213. If at any point you are asked for a user name and password use the default of ‘admin’/‘changeit’. Demonstration 2 (reading and adding data, curl) In my second demonstration I used curl directly from the console to read streams, write events and then read back those events. On GitHub I have included is a set of curl commands, CurlCommandLine.txt, and a sample data file, SampleData.json, to load an event into a DDDNorth3 stream. As there is not much data in the Event Store at this point I used the $stats-127.0.0.1:2113 which is a stream containing performance statistics for Event Store and is updated every 30 seconds (default). Demonstration 3 (projections) On GitHub I have included a sample projection, Projection-ByRoom.txt, which will create streams based on the room on which a session was held on the DDDNorth3 agenda. Browse to the management console, http://127.0.0.1:2213.  
Click on Projections, New Projection, give it a name, Sessions-ByRoom, and copy in the JavaScript in the Projection-ByRoom.txt file.  Select Continuous, tick Emit Enabled and then click on Post. It should run immediately. You may by challenged for the administration login for the management console, if so use the default user name and password; 'admin'/'changeit'. Demonstration 4 (C# client) The final demonstration was the Visual Studio 2012 project using the Event Store client – referenced directly as C:\EventStore\v2.0.1\EventStore.ClientAPI.dll, although you can switch this to the latest Event Store client NuGet package. The source code provides a console app for viewing projections with the projection manager (HTTP connection), as well as containing a full set of data for the entire DDDNorth3 agenda.  It also deals with the strategy for reading newest events backwards to older events and ignoring older events that have been superseded. Resources Event Store home page: http://www.geteventstore.com/ Event Store source code on GitHub: https://github.com/eventstore/eventstore Event Store documentation on GitHub: https://github.com/eventstore/eventstore/wiki (includes index to @RobAshton’s blog series on Event Store at https://github.com/eventstore/eventstore/wiki#rob-ashton---projections-series) Event Store forum in Google Groups: https://groups.google.com/forum/?fromgroups#!forum/event-store TopShelf Windows service wrapper is available on github: https://gist.github.com/trbngr/5083266

    Read the article

  • How to fix: Handler “PageHandlerFactory-Integrated” has a bad module “ManagedPipelineHandler” in its module list

    - by ybbest
    Issue: Recently, I was having issues deploying an ASP.NET MVC 4 application to Windows Server 2008 R2. After adding the necessary roles and features, I set up an application in IIS. However, I received the following error message:

      Handler “PageHandlerFactory-Integrated” has a bad module “ManagedPipelineHandler” in its module list

    Solution: It turns out that this is because ASP.NET was not completely installed with IIS, even though I checked that box in the “Add Feature” dialog. To fix this, I simply ran the following command at the command prompt:

      %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i

    If I had been on a 32-bit system, it would have looked like the following:

      %windir%\Microsoft.NET\Framework\v4.0.21006\aspnet_regiis.exe -i

    References: http://stackoverflow.com/questions/6846544/how-to-fix-handler-pagehandlerfactory-integrated-has-a-bad-module-managedpip

    Read the article

  • Un-failing over a Cisco PIX 515e

    - by ABrown
    We had a power outage at our data center last week and when our dual PIX 515E running IOS 7.0(8) (configured with a failover cable) came back, they were in a failed over state where the Secondary unit is active and the Primary unit is standby I have tried 'failover reset', 'failover active', and 'failover reload-standby' as well as executing reloads on both units in a variety of orders, and they don't come back Primary/Active Secondary/Standby. The only thing in my arsenal that I haven't tried is driving to the data center and performing a hard reboot, which I hate to do. I have read How Failover Works on the Cisco Secure Firewall and it seems like this should be wicked straight forward. output of show failover on Primary: Failover On Cable status: Normal Failover unit Primary Failover LAN Interface: N/A - Serial-based failover enabled Unit Poll frequency 15 seconds, holdtime 45 seconds Interface Poll frequency 15 seconds Interface Policy 1 Monitored Interfaces 2 of 250 maximum Version: Ours 7.0(8), Mate 7.0(8) Last Failover at: 02:52:05 UTC Mar 10 2010 This host: Primary - Standby Ready Active time: 0 (sec) Interface outside (x.x.x.165): Normal Interface inside (y.y.y.3): Normal Other host: Secondary - Active Active time: 897045 (sec) Interface outside (x.x.x.164): Normal Interface inside (y.y.y.4): Normal Stateful Failover Logical Update Statistics Link : Unconfigured. output of show failover on Secondary: Failover On Cable status: Normal Failover unit Secondary Failover LAN Interface: N/A - Serial-based failover enabled Unit Poll frequency 15 seconds, holdtime 45 seconds Interface Poll frequency 15 seconds Interface Policy 1 Monitored Interfaces 2 of 250 maximum Version: Ours 7.0(8), Mate 7.0(8) Last Failover at: 02:03:04 UTC Feb 28 2010 This host: Secondary - Active Active time: 896925 (sec) Interface outside (x.x.x.164): Normal Interface inside (y.y.y.4): Normal Other host: Primary - Standby Ready Active time: 0 (sec) Interface outside (x.x.x.165): Normal Interface inside (y.y.y.3): Normal Stateful Failover Logical Update Statistics Link : Unconfigured. I'm seeing the following in my syslog: Mar 10 03:05:00 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. Mar 10 03:05:09 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reload-standby' command. Mar 10 03:05:12 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed. Mar 10 03:05:12 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed. Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed. Mar 10 03:06:09 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down. Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed. Mar 10 03:06:10 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up. Mar 10 03:06:10 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed. Mar 10 03:06:23 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready. Mar 10 03:06:23 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready. Mar 10 03:06:24 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready. Mar 10 03:07:05 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. 
Mar 10 03:07:31 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover active' command. Mar 10 03:08:04 fw1 %PIX-5-611103: User logged out: Uname: enable_1 Mar 10 03:08:04 fw1 %PIX-6-315011: SSH session from admin1_int on interface inside for user "pix" terminated normally Mar 10 03:08:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed. Mar 10 03:08:39 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed. Mar 10 03:09:10 fw1 %PIX-6-605005: Login permitted from admin1_int/36891 to inside:192.168.4.4/ssh for user "pix" Mar 10 03:09:23 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. Mar 10 03:09:38 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed. Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down. Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed. Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up. Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed. Mar 10 03:09:52 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready. Mar 10 03:09:52 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready. Mar 10 03:09:53 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready. I'm not exactly sure how to interpret that syslog data. Primary doesn't seem to even try to become Active. When I reload the individual units separately, my connections are retained, so it doesn't seem like I have a real hardware failure. Is there something I can query (IOS or SNMP) to check for hardware issues? Any thoughts? My IOS-fu is weak. Thanks for any help you might provide, Aaron

    Read the article

  • DDD North 3 Presentation and source code &ndash; &lsquo;Event Store - an introduction to a DSD for event sourcing and notifications&rsquo;

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/15/ddd-north-3-presentation-and-source-code-ndash-lsquoevent-store.aspxThank you everyone at DDD North Thanks to all the people who helped organise the cracking conference that is DDD North 3, returning to Sunderland, and the great facilities at the University of Sunderland, and the fine drinks reception at Sunderland Software City.  The whole event wouldn’t be possible without the sponsors who ensured over 400 people were kept fed and watered so they could enjoy the impressive range of sessions. And lastly, a thank you to all those delegates who gave up their free time on a Saturday to spend a day dashing between lecture rooms, including a late change to my room which saw 40 people having to brave a journey between buildings in the fine drizzle. The enthusiasm from the delegates always helps recharge my geek batteries. Presentation and source code My presentation, source code, Event Store runners and text files containing the various command line parameters used for curl is now available on GitHub; https://github.com/westleyl/DDDNorth3-EventStore. Don’t worry if you don’t have a GitHub account, you don’t need one, you can just click on the Download Zip button on the right hand menu to download all the files as a single ZIP file.  If all you want is the PowerPoint presentation, go to https://github.com/westleyl/DDDNorth3-EventStore/blob/master/Powerpoint/DDDNorth-EventStore.pptx, and click on the View Raw button. Downloading and installing Event Store and Tools Download Event Store http://download.geteventstore.com – I unzipped these files into C:\EventStore\v2.0.1 Download Curl from http://curl.haxx.se/download.html – I downloaded Win64 Generic (with SSL) and unzipped these files into C:\curl version 7.31.0 Running the tools I used in my presentation Demonstration 1 (running Event Store) You can use one of my Event Store runner command files to run the single node version of Event Store, using default ports of 2213 for HTTP and 1113  for TCP, and with a wildcard HTTP pattern.  Both take a single command line parameter to specify the location of the data and log files.  The runners assume the single node executable is located in C:\EventStore\v2.0.1, and will placed data files and logs beneath C:\EventStore\Data, i.e. RunEventStore.cmd TestData1 This will create data files in C:\EventStore\Data\TestData1\Data and log files in C:\EventStore\Data\TestData1\logs. If, when running Event Store you may see the following message, [03288,15,06:23:00.622] Failed to start http server Access is denied You will either need to run Event Store in an administrator console window, or you can use the netsh command to create a firewall permission to allow HTTP listening (this will need to be run, once, in an administrator console window), netsh http add urlacl url=http://*:2213/ user=liam You can always delete this later by running the delete; netsh http delete urlacl url=http://*:2213/ If you want to confirm that everything is running OK, open the management console in a browser by navigating to http://127.0.0.1:2213. If at any point you are asked for a user name and password use the default of ‘admin’/‘changeit’.   Demonstration 2 (reading and adding data, curl) In my second demonstration I used curl directly from the console to read streams, write events and then read back those events. 
On GitHub I have included a set of curl commands, CurlCommandLine.txt, and a sample data file, SampleData.json, to load an event into a DDDNorth3 stream. As there is not much data in the Event Store at this point, I used the $stats-127.0.0.1:2113 stream, which contains performance statistics for Event Store and is updated every 30 seconds (by default). Demonstration 3 (projections) On GitHub I have included a sample projection, Projection-ByRoom.txt, which will create streams based on the room in which a session was held on the DDDNorth3 agenda. Browse to the management console, http://127.0.0.1:2213. Click on Projections, New Projection, give it a name, Sessions-ByRoom, and copy in the JavaScript in the Projection-ByRoom.txt file. Select Continuous, tick Emit Enabled and then click on Post. It should run immediately. You may be challenged for the administration login for the management console; if so, use the default user name and password, 'admin'/'changeit'. Demonstration 4 (C# client) The final demonstration was the Visual Studio 2012 project using the Event Store client – referenced directly as C:\EventStore\v2.0.1\EventStore.ClientAPI.dll, although you can switch this to the latest Event Store client NuGet package. The source code provides a console app for viewing projections with the projection manager (HTTP connection), as well as a full set of data for the entire DDDNorth3 agenda. It also deals with the strategy of reading the newest events backwards towards older events and ignoring older events that have been superseded. Resources Event Store home page: http://www.geteventstore.com/ Event Store source code on GitHub: https://github.com/eventstore/eventstore Event Store documentation on GitHub: https://github.com/eventstore/eventstore/wiki (includes index to @RobAshton’s blog series on Event Store at https://github.com/eventstore/eventstore/wiki#rob-ashton---projections-series) Event Store forum in Google Groups: https://groups.google.com/forum/?fromgroups#!forum/event-store TopShelf Windows service wrapper is available on GitHub: https://gist.github.com/trbngr/5083266
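
To give a flavour of the curl demonstration, here is a rough sketch of the kind of calls involved. This is illustrative only – the exact commands used in the talk are in CurlCommandLine.txt in the GitHub repository, the stream name below is made up, and the precise write envelope and content type can vary between Event Store versions.

      REM Read the head of a stream back as JSON (2213 is the HTTP port the runners use)
      curl -i -H "Accept: application/json" "http://127.0.0.1:2213/streams/dddnorth3-demo"

      REM Append the events in SampleData.json to the same stream
      REM (assumed to be an array of {eventId, eventType, data} envelopes, per the HTTP API)
      curl -i -d @SampleData.json -H "Content-Type: application/json" "http://127.0.0.1:2213/streams/dddnorth3-demo"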

    Read the article

  • How to fix “Microsoft SharePoint is not supported with version 4.0.30319.225 of the Microsoft .Net Runtime” in PowerGUI

    - by ybbest
    Today, when I tried to run some PowerShell commands against SharePoint in PowerGUI, I encountered the error message below: Problem: Remove-SPSite : Microsoft SharePoint is not supported with version 4.0.30319.225 of the Microsoft .Net Runtime. At C:\SiteCreation.ps1:37 char:14 + CategoryInfo : InvalidData: (Microsoft.Share…mdletRemoveSite:SPCmdletRemoveSite) [Remove-SPSite], PlatformNotSupportedException Analysis: The error message is pretty clear: PowerGUI tries to run the PowerShell command under .NET 4.0, which is not supported by SharePoint 2010; SharePoint 2010 only supports .NET 3.5. So how can I change the settings so that PowerShell runs under .NET 3.5 in PowerGUI? The solution is pretty easy. Solution: 1. Open Windows Explorer, navigate to C:\Program Files (x86)\PowerGUI\ and open the configuration file ScriptEditor.exe.config. 2. Change the supportedRuntime version under the startup settings by removing the version=”v4.0″ entry, as shown in the sketch below. 3. Restart your PowerGUI and rerun your script. It works like a charm.
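
    A rough before/after sketch of the relevant startup section is shown below. The original post illustrated this step with screenshots; the surrounding values here follow the standard .NET configuration schema and are assumptions, so the exact contents of your ScriptEditor.exe.config may differ slightly.

        <!-- Before: the v4.0 entry makes PowerGUI load the .NET 4.0 runtime -->
        <startup>
          <supportedRuntime version="v4.0" />
          <supportedRuntime version="v2.0.50727" />
        </startup>

        <!-- After: with the v4.0 entry removed, PowerGUI falls back to the 2.0 runtime,
             which hosts .NET 3.5 and keeps the SharePoint 2010 cmdlets happy -->
        <startup>
          <supportedRuntime version="v2.0.50727" />
        </startup>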

    Read the article

  • Enhance Internet Explorer 9 with Add-Ons

    - by Lori Kaufman
    If you’re one of those who still use that “other” browser (Internet Explorer or IE), you’ll be glad to know that there are ways to extend the functionality of IE just like you can in Firefox or Chrome. There are not as many add-ons for Internet Explorer as there are for Firefox and Chrome, but you can explore the official Internet Explorer Gallery to see if there are any that pique your interest. In this article, we show you how to install add-ons in Internet Explorer 9. To begin, activate the Command bar, if it’s not already available. Right-click on an empty area of the tab bar and select Command bar from the popup menu.

    Read the article

  • CodePlex Daily Summary for Friday, May 14, 2010

    CodePlex Daily Summary for Friday, May 14, 2010New ProjectsCampfire#: Campfire# is a campfire client written in .NET 4.0 using WPF, which uses the Campfire API.CHESS: Systematic Concurrency Testing: CHESS is a tool for systematic and disciplined concurrency testing. Given a concurrent test, CHESS systematically enumerates the possible thread sc...cmpp: cmppcycloid: Arcanoid gameDotNetNuke® C#: The DotNetNuke® project is developed and maintained on a Visual Basic codebase, however a C# version has always been a popular request. This is a ...EasyBuildingCMS.NET: EasyBuildingCMS is an easy-to-use content management system.fluidCMS: Provide for flexible management of web content that is not tightly integrated with the layout and rendering of sites that consume the content.Golem: An automation tool oriented to localization engineering environmentHB Batch Encoder Mk 2: HandBrake Batch Encoder Mk II This Program was adapted from an original project downloaded from codeplex by the name of "Handbrake Batch Encoder"...Integrating Social Media Networks: This is part of my post-graduation project.Ketonic: The Ketonic project aims to improve development of websites based on the Kentico CMS. LinkSharp: LinkSharp is a short-URL provider that can be used to generate short static non changing URL's. The web interface allows you to easily add / edit /...PUC NET (C++ Network Library - PUC Minas): This is an Academic Library for an Easy Development of Applications and Games based on Network Communication.Regular Expression Tester: Small utility for testing regular expressionsSharePoint User Management WebPart: SharePoint User Management WebPartSharpBox: SharpBox makes it easier for .NET developers to interact with existing cloud storage service, e.g. DropBox or Amazon S3Snipivit: Snipivit is a snippet manager service and VS2010 plugin that allows small development teams to store all their code snippets on a central database,...Software Factories Applied: Software Factories Applied is a project collecting the companion bits for the eponymous book to be published by Wiley & Sons in 2011. The authors ...The Ping Master: A service that periodically pings network addresses and allows the running of command line type utilities in response to success or failure.Title Safe Region Checker: A simple utility for XNA developers to check screenshots from games intended for release on the LIVE Marketplace for "title safe" region compliance...Trial project: sky is blueUyghur Named Date: Generate Uyghur named date string, i.e. a date string using the Uyghur month names.Wildcard Search Web Part for SharePoint 2010: The Wildcard Search web part for MOSS 2007 was wildly successful. Although, SharePoint 2010 has built-in wildcard searching functionality, the out...在线Office控件 Online Offical Control: an online Office control软件作品发布平台 (Software Publishing Platform): SoftwarePublishPlatform, a platform for publishing software worksNew ReleasesDemina: Demina Binaries version 0.1: Demina binaries are now available. This release (version 0.1) is an alpha version. Please report any bugs for extermination.EasyTFS: EasyTfs 1.0 Beta 2: Added cache refreshing when contents are updated rather than just every 10 minutes. Added window title based on currently-open case. Added attachme...Extending C# editor - Outlining, classification: Initial release: Initial releaseHB Batch Encoder Mk 2: HB Batch Encoder Mk2 v1.01: Binary release files.HB Batch Encoder Mk 2: Source Code: Source CodeHobbyBrew Mobile: Beta 2: Numerous bugs fixed, and an “approximate” implementation of heating for Infusion added. 
Recommended update!HouseFly controls: HouseFly controls beta 1.0.2.0: HouseFly controls release 1.0.2.0 betaHtml Reader: Beta 2: I fixed a bug in HtmlElementCollection, Which exposed an integer enumerator, instead of enumerating through HtmlElements. I added a WPF Window tha...Html to OpenXml: HtmlToOpenXml 1.2: Fix some reported bug. See change set for description. The dll library to include in your project. The dll is signed for GAC support. Compiled wi...Infection Protection: Infection Protection 0.1: This is the final version of Infection Protection that was entered into the 2010 OGPC game competition.Jobping Url Shortener: Deploy Code 0.5.1: Deployment code for Version 0.5 This version includes our Jobping style.Jobping Url Shortener: Source Code 0.5.1: Source code for the 0.5 release. This release includes our Jobping style skin.Kooboo HTML form: Kooboo HTML form module 2.1.0.1: HTML form module contributed by member aledelgo. Add SMTP user and password authentication.KooBoo Image Galery: Beta 2: This new version corrects some issues pointed by Guoqi Zheng Some schema and folders were renamed, so it's better to uninstall the module and remo...MFCMAPI: May 2010 Release: If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. Build: 6.0.0.1020 The 64 bit bu...MVC Turbine: Release 2.1 for MVC2: This RTM contains the same features as v2.0 RTM plus these features: Instance Registration to IServiceLocator You can now add an instance of a typ...NazTek.Extension.Clr4: NazTek.Extension.Clr4 Binary: Binary releaseNazTek.Extension.Clr4: NazTek.Extension.Clr4 Source: Cab with source codeNSIS Autorun: NSIS Autorun 0.1.8: This release includes source code, executable binaries and example materials.Ottawa IT Day: 2010 Source Code and Presentations: During the Ottawa IT Day 2010, some of the presenters shared their code (and some presentations). This release is the culmination of all those effo...PHPWord: PHPWord 0.6.1 Beta: Changelog: Fixed Error when adding a JPEG image and opening in office 2007 Issue #1 Fixed Already defined constant PHPWORD_BASE_PATH Issue #2 F...Rapid Dictionary: Rapid Dictionary Alpha 2.0: Release Notes * Try auto updatable version: http://install.rapiddict.com/index.html Rapid Dictionary Alpha 2.0 includes such functionality: ...Shake - C# Make: Shake v0.1.18: Core changes. Process wrapper class, console logger, etc.SharpBox: SharpBox-Trunk: This is the SharpBox build from the trunk source branch!SharpBox: SharpBox-Trunk-Initial-Source: The initial source code, will be updated from time to timeSpackle.NET: 4.0.0.0 Release: This new drop contains the following A CreateBigInteger() method on SecureRandom to create random BigInteger values. An extension method to prop...StreamInsight example queries, input adapters and output adapters: StreamInsight Examples for V1.0 RTM: Zipped source code.The Ping Master: v0.1.0.0: Early release of The Ping Master for test purposes. Configuration tool is unfinished and does not include an installer.Title Safe Region Checker: Title Safe Region Checker v1.0.0.1: Release 1.0 of Title Safe Region Checker. No known bugs or problems. File is a zipped directory containing the necessary installation files.TortoiseHg: TortoiseHg 1.0.3: This is a bug fix release, we recommend all users upgrade to 1.0.3Usa*Usa Libraly: Smart.Windows.Navigation 0.4: Smart.Windows.Navigation simple navigation library ver 0.4.0. Include Windows Forms & Compact Framework samples. 
Information - Smart.Windows.Mvc ...VCC: Latest build, v2.1.30513.0: Automatic drop of latest buildWabbitStudio Z80 Software Tools: Wabbitcode: Wabbitcode is a Z80 Assembly IDE for Windows, OS X, and Linux. Built to take full advantage of the features of SPASM and Wabbitemu, Wabbitcode has...white: Release 0.20: Source Code: https://white-project.googlecode.com/svn/tags/0.20 Add few more keyboard keys like windows button and F13-F24. Fixed bugs for keyboar...Wildcard Search Web Part for SharePoint 2010: Version 1.0 Release 1: This is the initial release of the Wildcard Search Web Part for SharePoint 2010. All queries will be issued as wildcards unless disabled with the ...Windows Azure Command-line Tools for PHP Developers: Windows Azure Command-line Tools May 2010 Update: May 2010 Update – May 13, 2010 We are pleased to announce the May 2010 update of Windows Azure Command-Line Tools. In addition to bug fixes and i...WinXmlCook: WinXmlCook 2.1: Version 2.1 released!Xrns2XMod: Xrns2XMod 1.1: some source code optimization在线Office控件 Online Offical Control: SPOffice2.0Release: This release has been tested under MS Office 2003/2007, WPS2009 and WPS2010.Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryMirror Testing SystemRawrBlogEngine.NETPHPExcelMicrosoft Biology FoundationwhiteWindows Azure Command-line Tools for PHP DevelopersStyleCopShake - C# Make

    Read the article

  • How To Disconnect Non-Mapped UNC Path “Drives” in Windows

    - by The Geek
    Have you ever browsed over to another PC on your network using “network neighborhood”, and then connected to one of the file shares? Without a drive letter, how do you disconnect yourself once you’ve done so? Really confused as to what I’m talking about? Let’s walk through the process. First, imagine that you browse through and connect to a share, entering your username and password to gain access. The problem is that you stay connected, and there’s no visible way to disconnect yourself. If you try and shut down the other PC, you’ll receive a message that users are still connected. So let’s disconnect! Open up a command prompt, and then type in the following: net use This will give you a list of the connected drives, including the ones that aren’t actually mapped to a drive letter. To disconnect one of the connections, you can use the following command: net use /delete \\server\sharename For example, in this instance we’d disconnect like so: net use /delete \\192.168.1.205\root$ Now when you run the “net use” command again, you’ll see that you’ve been properly disconnected. If you wanted to actually connect to a share without mapping a drive letter, you can do the following: net use /user:Username \\server\sharename Password You could then just pop \\server\sharename into a Windows Explorer window and browse the files that way. Note that this technique should work exactly the same in any version of Windows.
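
    Putting the pieces together, the whole sequence looks like this at the prompt (the server, share and credentials are the article’s own examples and placeholders):

        REM List current connections, including ones without a drive letter
        net use

        REM Drop the connection to the hidden root$ share from the example above
        net use /delete \\192.168.1.205\root$

        REM Reconnect to a share without mapping a drive letter
        net use /user:Username \\server\sharename Password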

    Read the article

  • Hosting a website on Heroku... I know how to, but I'm running into problems!

    - by Thomas Miller
    I'm starting to learn more about the back-end side of programming. Recently I started up Heroku for the second or third time. This time I actually installed the Git update to my Mac and installed Heroku in the terminal. I wanted to upload a static HTML site with the sinatra gem. Everything worked out fine inside the terminal, though I added sinatra after I got everything working and the file with the site hooked up to Heroku. In my logs I did see that I was missing the sinatra gem, so I installed it. My site contains both the proper app.rb and config.ru files. I have nothing showing up online. Just a blank screen! Contacting Heroku on this problem has been very difficult. I get a response every day, and every day I respond with a question to the answer that didn't help me at all. 2011-05-18T00:25:20+00:00 app[web.1]: 71.198.0.51 - - [17/May/2011 17:25:20] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T00:25:20+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=2ms bytes=313 2011-05-18T00:25:26+00:00 app[web.1]: 71.198.0.51 - - [17/May/2011 17:25:26] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T00:25:26+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=5ms bytes=313 2011-05-17T18:25:51-07:00 heroku[web.1]: Idling 2011-05-17T18:26:01-07:00 heroku[web.1]: State changed from up to down 2011-05-18T01:26:01+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T01:26:01+00:00 app[web.1]: Stopping ... 2011-05-18T01:26:02+00:00 heroku[web.1]: Process exited 2011-05-17T20:12:46-07:00 heroku[web.1]: Unidling 2011-05-17T20:12:47-07:00 heroku[web.1]: State changed from created to starting 2011-05-18T03:12:48+00:00 heroku[web.1]: Starting process with command: thin -p 40055 -e production -R /home/heroku_rack/heroku.ru start 2011-05-18T03:12:49+00:00 app[web.1]: Thin web server (v1.2.6 codename Crazy Delicious) 2011-05-18T03:12:49+00:00 app[web.1]: Maximum connections set to 1024 2011-05-18T03:12:49+00:00 app[web.1]: Listening on 0.0.0.0:40055, CTRL+C to stop 2011-05-18T03:12:50+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=9954ms service=6ms bytes=565 2011-05-18T03:12:50+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:12:50] "GET /style.css HTTP/1.1" 200 - 0.0012 2011-05-18T03:12:50+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-17T20:12:50-07:00 heroku[web.1]: State changed from starting to up 2011-05-18T03:12:51+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:12:51] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T03:12:51+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms bytes=313 2011-05-18T03:13:05+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=5ms bytes=565 2011-05-18T03:13:05+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:13:05] "GET / HTTP/1.1" 200 293 0.0011 2011-05-18T03:13:05+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=2ms bytes=313 2011-05-18T03:13:05+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:13:05] "GET /favicon.ico HTTP/1.1" 404 18 0.0007 2011-05-18T03:57:05+00:00 app[web.1]: 172.18.33.56, 58.96.134.66 - - [17/May/2011 20:57:05] "GET / HTTP/1.1" 200 293 0.0007 2011-05-18T03:57:05+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=4ms bytes=565 2011-05-18T03:57:05+00:00 app[web.1]: 172.18.33.56, 58.96.134.66 - - [17/May/2011 
20:57:05] "GET /style.css HTTP/1.1" 200 - 0.0007 2011-05-18T03:57:05+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-18T03:57:08+00:00 app[web.1]: 172.18.33.56, 58.96.134.66 - - [17/May/2011 20:57:08] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-17T21:58:27-07:00 heroku[web.1]: Idling 2011-05-18T04:58:30+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T04:58:30+00:00 app[web.1]: Stopping ... 2011-05-18T04:58:30+00:00 heroku[web.1]: Process exited 2011-05-17T21:58:33-07:00 heroku[web.1]: State changed from up to down 2011-05-17T23:11:58-07:00 heroku[web.1]: Unidling 2011-05-17T23:11:58-07:00 heroku[web.1]: State changed from created to starting 2011-05-18T06:12:00+00:00 heroku[web.1]: Starting process with command: thin -p 40091 -e production -R /home/heroku_rack/heroku.ru start 2011-05-18T06:12:01+00:00 app[web.1]: Thin web server (v1.2.6 codename Crazy Delicious) 2011-05-18T06:12:01+00:00 app[web.1]: Maximum connections set to 1024 2011-05-18T06:12:01+00:00 app[web.1]: Listening on 0.0.0.0:40091, CTRL+C to stop 2011-05-18T06:12:01+00:00 app[web.1]: 183.97.156.226 - - [17/May/2011 23:12:01] "GET / HTTP/1.1" 200 293 0.0017 2011-05-18T06:12:02+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=3209ms service=5ms bytes=565 2011-05-18T06:12:03+00:00 app[web.1]: 183.97.156.226 - - [17/May/2011 23:12:03] "GET /style.css HTTP/1.1" 200 - 0.0019 2011-05-17T23:12:08-07:00 heroku[web.1]: State changed from starting to up 2011-05-18T00:13:13-07:00 heroku[web.1]: Idling 2011-05-18T00:13:16-07:00 heroku[web.1]: State changed from up to down 2011-05-18T07:13:16+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T07:13:16+00:00 app[web.1]: Stopping ... 2011-05-18T07:13:17+00:00 heroku[web.1]: Process exited 2011-05-18T01:54:21-07:00 heroku[web.1]: Unidling 2011-05-18T01:54:21-07:00 heroku[web.1]: State changed from created to starting 2011-05-18T08:54:23+00:00 heroku[web.1]: Starting process with command: thin -p 59491 -e production -R /home/heroku_rack/heroku.ru start 2011-05-18T08:54:24+00:00 app[web.1]: Thin web server (v1.2.6 codename Crazy Delicious) 2011-05-18T08:54:24+00:00 app[web.1]: Maximum connections set to 1024 2011-05-18T08:54:24+00:00 app[web.1]: Listening on 0.0.0.0:59491, CTRL+C to stop 2011-05-18T01:54:28-07:00 heroku[web.1]: State changed from starting to up 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=6943ms service=6ms bytes=565 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET / HTTP/1.1" 200 293 0.0018 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /style.css HTTP/1.1" 200 - 0.0014 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=1ms bytes=313 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms bytes=313 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /favicon.ico HTTP/1.1" 404 18 0.0008 2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico 
dyno=web.1 queue=0 wait=0ms service=1ms bytes=313 2011-05-18T02:55:23-07:00 heroku[web.1]: Idling 2011-05-18T02:55:33-07:00 heroku[web.1]: State changed from up to down 2011-05-18T09:55:34+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T09:55:34+00:00 app[web.1]: Stopping ... 2011-05-18T09:55:34+00:00 heroku[web.1]: Process exited 2011-05-18T07:23:10-07:00 heroku[web.1]: State changed from created to starting 2011-05-18T14:23:12+00:00 heroku[web.1]: Starting process with command: thin -p 20560 -e production -R /home/heroku_rack/heroku.ru start 2011-05-18T14:23:13+00:00 app[web.1]: Thin web server (v1.2.6 codename Crazy Delicious) 2011-05-18T14:23:13+00:00 app[web.1]: Maximum connections set to 1024 2011-05-18T14:23:13+00:00 app[web.1]: Listening on 0.0.0.0:20560, CTRL+C to stop 2011-05-18T07:23:13-07:00 heroku[web.1]: State changed from starting to up 2011-05-18T14:23:14+00:00 app[web.1]: 12.183.19.10 - - [18/May/2011 07:23:14] "GET / HTTP/1.1" 200 293 0.0018 2011-05-18T14:23:14+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=7ms bytes=565 2011-05-18T14:23:14+00:00 app[web.1]: 12.183.19.10 - - [18/May/2011 07:23:14] "GET /style.css HTTP/1.1" 200 - 0.0015 2011-05-18T14:23:14+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-18T14:23:14+00:00 app[web.1]: 12.183.19.10 - - [18/May/2011 07:23:14] "GET /favicon.ico HTTP/1.1" 404 18 0.0009 2011-05-18T14:23:14+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=2ms bytes=313 2011-05-18T08:24:03-07:00 heroku[web.1]: Idling 2011-05-18T08:24:07-07:00 heroku[web.1]: State changed from up to down 2011-05-18T15:24:07+00:00 heroku[web.1]: Stopping process with SIGTERM 2011-05-18T15:24:07+00:00 app[web.1]: Stopping ... 2011-05-18T17:34:27-07:00 heroku[web.1]: Unidling 2011-05-18T17:34:28-07:00 heroku[web.1]: State changed from created to starting 2011-05-19T00:34:29+00:00 heroku[web.1]: Starting process with command: thin -p 57621 -e production -R /home/heroku_rack/heroku.ru start 2011-05-18T17:34:31-07:00 heroku[web.1]: State changed from starting to up 2011-05-19T00:34:32+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=5ms bytes=565 2011-05-19T00:34:32+00:00 app[web.1]: 97.83.58.74 - - [18/May/2011 17:34:32] "GET / HTTP/1.1" 200 293 0.0016 2011-05-19T00:34:32+00:00 app[web.1]: 97.83.58.74 - - [18/May/2011 17:34:32] "GET /style.css HTTP/1.1" 200 - 0.0011 2011-05-19T00:34:32+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269 2011-05-19T00:34:34+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms bytes=313 2011-05-19T00:34:34+00:00 app[web.1]: 97.83.58.74 - - [18/May/2011 17:34:34] "GET /favicon.ico HTTP/1.1" 404 18 0.0007 2011-05-18T18:35:48-07:00 heroku[web.1]: Idling 2011-05-18T18:35:51-07:00 heroku[web.1]: State changed from up to down
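
For what it is worth, a minimal static-site setup of the kind described usually looks something like the sketch below. These file contents are assumptions for illustration, not the asker's actual code; Sinatra serves anything placed in a public/ folder automatically, and the route simply returns index.html for the root URL.

      # config.ru – the rackup file the Heroku thin dyno boots
      require './app'
      run Sinatra::Application

      # app.rb – a minimal Sinatra app serving a static index.html from ./public
      require 'sinatra'

      get '/' do
        send_file 'public/index.html'
      end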

    Read the article

< Previous Page | 385 386 387 388 389 390 391 392 393 394 395 396  | Next Page >