Search Results

Search found 127595 results on 5104 pages for 'http status code'.

  • CodePlex Daily Summary for Friday, October 04, 2013

    CodePlex Daily Summary for Friday, October 04, 2013Popular ReleasesMoreTerra (Terraria World Viewer): Version 1.11: Release Notes Release 1.11 =========== =Bug Fixes= =========== Now works with Terraria 1.2 wld files. =============== =Known Issues= =============== Not all tiles and items are accounted for. Missing tiles just show up as pink. This is actively being worked on but wanted to get a build out that works with 1.2VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added Option to login as Guest NEW: Added Support for "ImageTeam.org linksStyleMVVM: 3.1.4: This release has virtually no code change but adds multiple new Item templates for the Windows Phone 8 platformSystem Center Orchestrator Community Project: Orchestrator Visio and Word Generator 1.5: This tool lets you export Orchestrator runbooks as a Visio diagram, and you can also generate an optional Word file as well. Components exported as of v1.5 are : - Title of the runbook - Activities and their names/thumbnails/description (description is displayed as a callout in the Visio diagram, attached to the shape of the activity) - Links and their names/colors - Looping and their interval Thumbnails and activities are grouped in the Visio diagram, for easy manipulation of the diagra...State of Decay Save Manager: Version 1.0.4: Add version at bottom of formDNN® Form and List: DNN Form and List 06.00.07: DotNetNuke Form and List 06.00.06 Changes to 6.0.7•Fixed an error in datatypes.config that caused calculated fields to be missing in 6.0.6 Changes to 6.0.6•Add in Sql to remove 'text on row' setting for UserDefinedTable to make SQL Azure compatible. •Add new azureCompatible element to manifest. •Added a fix for importing templates. Changes to 6.0.2•Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 •Change: Data is now stored in nvarchar(max) instead of ntext C...SimpleExcelReportMaker: Serm 0.03: SourceCode and Sample .Net Framework 3.5 AnyCPU compile.RDFSharp - Start playing with RDF!: RDFSharp-0.6.6: GENERAL (NEW) Introduction of INT64 hashing engine (codenamed "Greta"); QUERY (FIX) Incorrect query evaluation due to faulty detection of optional patterns (v0.6.5 regression); (FIX) Missing update of PatternGroupID information after adding patterns and filters to a pattern group; (FIX) Ensure Context information of a pattern is not null before trying to collect it as variable; (MISC) Changed semantics of Context information of a pattern: if not provided, it will be ignored; (MISC...Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.C# Intellisense for Notepad++: Release v1.0.7.2: - smart indentation - document formatting To avoid the DLLs getting locked by OS use MSI file for the installation.BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. added -nosize switch to turn off the size- and gzip-calculations done after minification. removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). 
In previous builds, the net35 and net20...AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release (Updated) Version 7.1002September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Important UpdateThis release has been updated to fix two issues: Upda...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanksVisual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0AudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download ends. word lis...Wsus Package Publisher: Release v1.3.1309.28: Fix a bug, where WPP crash when running on a computer where Windows was installed in another language than Fr, En or De, and launching the Update Creation Wizard. Fix a bug, where WPP crash if some Multi-Thread job are launch with more than 64 items. Add a button to abort "Install This Update" wizard. Allow WPP to remember which columns are shown last time. Make URL clickable on the Update Information Tab. Add a new feature, when Double-Clicking on an update, the default action exec...Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.4 binaries: Improved performance of GroupByAcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...New ProjectsAnalysis Services Activity Viewer 2012: This project was created from a blog post I wrote back in February 2013 http://redphoenix.me/2013/08/22/upgrade-activity-viewer-2008-to-sql-server-2012/ARYSTA SYSTEM: Notebook System A/R Sales * Sales Order * Delivery * Sales Invoice * Sales Return * A/R Credit Memo * A/R Debit Memo A/P Purchasing * Purchase Order BEWELL SYSTEM: This system is design for Bewell-C Incorporated. 
Project Manager: Ben Penafiel System Engineer: Jhay Camba Report Designer: William Damasco Caching IOC: Caching IOC Container This will cache any Interface return results making it really easy to introduce caching to your solution.Grupo Anclita: Trabajo Final Laboratorio 4Halcyonic Skin by HTML5-UP - for DNN: This skin was converted for use in DNN by Michael Doxsey. Original HTML template designed and built by HTML5-UP: http://html5up.net/halcyonic/ ProPro: project about other projectsS3Unlock: Unlocks S3 agents that are stuck installing.sdfsdlfsdlkj01: dsfdffsdSerendipity - Responsive Skin for DNN: This is an HTML Template by Elemis, converted for use in DNN. Elemis URL: http://elemisfreebies.com/premium-themes/ Free for personal use and ed. purposes only.SmartSystemMenu: Smart system menu for you.SP 2013 Custom MultiTenant Adminstration: This Project creates a custom SP 2013 Tenant Admin site template covering limitations of existing tenant admin site.Telephasic Skin by HTML5-UP - for DNN: This skin was converted for use in DNN by Michael Doxsey. Original HTML template designed and built by HTML5-UP: http://html5up.net/telephasic/TelerikedIn: Social web app trying to look and feel like the famous LinkedInTerra 2: Generador de personajes para el juego de rol Terra2tsydev01: TextLineUpdateVds2465 Parser: This Project is about implementing a Parser for the Vds2465 protocol. It includes parsing and generating Vds2465 telegram bytes.Your Appliances: Empresa De ElectrodomesticosZeroFour by HTML5-UP - for DNN: This skin was converted for use in DNN by Michael Doxsey. Original HTML template designed and built by HTML5-UP: http://html5up.net/zerofour.

    Read the article

  • Should my URLs be lowercase?

    - by Rowan Freeman
    According to this blog ("Understanding SEO Friendly URL Syntax Practices") I should change http://example.com/Hello-Dolly to http://example.com/hello-dolly. The reasons given are that URLs, in general, are case-sensitive, and that it will simplify any case-sensitive SEO and analytics reports. According to a GIF that I found on Wikipedia's article on URL normalization I should convert my URLs from any uppercase to all lowercase. However, I use ASP.NET MVC 4 and by default my URLs are structured like this (CamelCase): http://www.domain.com/Controller/Action/Parameter, http://www.greatsite.com/Categories/List/Bicycles. I've skimmed through RFC 1738 but I didn't see any definitive answer to this. Should I go out of my way to force the framework to change everything to lowercase? Why did Microsoft choose to design their framework like this if everybody is telling me to use lowercase?
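
    If you do decide to force it, here is a minimal sketch of one way to do it in ASP.NET MVC 4, assuming the project targets .NET Framework 4.5 (which added RouteCollection.LowercaseUrls). Route matching is case-insensitive either way; this only changes the URLs the framework generates:

        using System.Web.Mvc;
        using System.Web.Routing;

        public class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                // Emit generated URLs (Html.ActionLink, RedirectToAction, etc.) in lowercase.
                routes.LowercaseUrls = true;

                routes.MapRoute(
                    name: "Default",
                    url: "{controller}/{action}/{id}",
                    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }
        }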

    Read the article

  • New OFM versions released: SOA Suite 11.1.1.4 & BPM 11.1.1.4 & JDeveloper 11.1.1.4, WebLogic on JRockit 10.3.4 – feedback from the community

    - by Jürgen Kress
    Oracle SOA Suite 11g Installations This is the latest release of the Oracle SOA Suite 11g. Please see the Documentation tab for Release Notes, Installation Guides and other release specific information. Please also see the List of New Features and Samples provided for this release. Release 11gR1 (11.1.1.4.0) Microsoft Windows (32-bit JVM) Linux (32-bit JVM) Generic Oracle JDeveloper 11g Rel 1 (11.1.1.x) (JDeveloper + ADF) Integrated development environment certified on Windows, Linux, and Macintosh. License is free (read the Pricing FAQ). Studio Edition for Windows (1.2 GB) | Studio Edition for Linux (1.3 GB) | See All See Additional Development Tools Oracle WebLogic Server 11g Rel 1 (10.3.4) Installers The WebLogic Server installers include Oracle Coherence and Oracle Enterprise Pack for Eclipse and supports development with other Fusion Middleware products . The zip includes WebLogic Server only and is intended for WebLogic Server development only. Linux x86 (1.1 GB) | Windows x86 (1 GB) Zip for Windows x86, Linux x86, Mac OS X (316 MB) | See All Oracle WebLogic Server 11gR1 (10.3.4) on JRockit Virtual Edition Download For additional downloads please visit the Oracle Fusion Middleware Products Update Center Share your feedback with the @soacommunity on twitter SOASimone Simone Geib SOA Suite 11gR1 (11.1.1.4.0) has just been released: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html gschmutz gschmutz My new blog post: WebLogic Server, JDev, SOA, BPM, OSB and CEP 11.1.1.4 (PS3) available! - http://tinyurl.com/4negnpn simon_haslam Simon Haslam I'm very pleased to see WLS 10.3.4 for JRockit VE launched at the same time as the rest of PS3 http://j.mp/gl1nQm (32bit anyway) lucasjellema Lucas Jellema See http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/156082.xml for PS3 extension downloads BPM, SOA Editor, WebCenter demed demed List of new features in @OracleSOA 11gR1 PS3: http://bit.ly/fVRwsP is not extremely long but huge release by # of bugs fixed. Go! biemond Edwin Biemond WebLogic 10.3.4 new features http://bit.ly/f7L1Eu Exalogic Elastic Cloud , JPA2 , Maven plugin, OWSM policies on WebLogic SCA applications JDeveloper JDeveloper & ADF JDeveloper and Oracle ADF 11g Release 1 Patch Set 3 (11.1.1.4.0): New Features and Bug Fixes http://bit.ly/feghnY simon_haslam Simon Haslam WebLogic Server 10.3.4 (i.e. 11gR1 PS3) available now too http://bit.ly/eeysZ2 JDeveloper JDeveloper & ADF Share your impressions on the new JDeveloper 11g Patchset 3 release that came out today! 
Download it here: http://bit.ly/dogRN8 VikasAatOracle Vikas Anand SOA Suite 11gR1PS3 is Hotpluggable ...see list of features that @Demed posted..#soa #soacommunity   New versions of Oracle Fusion Middleware 11g R1 (11.1.1.4.x)  include: Oracle WebLogic Server 11g R1 (10.3.4) Oracle SOA Suite 11g R1 (11.1.1.4.0) Oracle Business Process Management 11g R1 (11.1.1.4.0) Oracle Complex Event Processing 11g R1 (11.1.1.4.0) Oracle Application Integration Architecture Foundation Pack 11g R1 (11.1.1.4.0) Oracle Service Bus 11g R1 (11.1.1.4.0) Oracle Enterprise Repository 11g R1 (11.1.1.4.0) Oracle Identity Management 11g R1 (11.1.1.4.0) Oracle Enterprise Content Management 11g R1 (11.1.1.4.0) Oracle WebCenter 11g R1 (11.1.1.4.0) - coming soon Oracle Forms, Reports, Portal & Discoverer 11g R1 (11.1.1.4.0) Oracle Repository Creation Utility 11g R1 (11.1.1.4.0) Oracle JDeveloper & Application Development Runtime 11g R1 (11.1.1.4.0) Resources Download  (OTN) Certification Documentation   New Features in Oracle SOA Suite 11g Release 1 (11.1.1.4.0) Updated: January, 2011 Go to Oracle SOA Suite 11g Doc Introduction Oracle SOA Suite 11gR1 (11.1.1.4.0) includes both bug fixes as well as new features listed below - click on the title of each feature for more details. Downloads, documentation links and more information on the Oracle SOA Suite available on the SOA Suite OTN page and as always, we welcome your feedback on the SOA OTN forum. New in Oracle SOA Suite in this release BPEL Component BPEL 2.0 support in JDeveloper The BPEL editor in JDeveloper now generates BPEL 2.0 code and introduces several new activities. Augmented XML variables auto-initialization capabilities The XML variable auto-initialization capabilities have been enhanced to support two need additional use cases: to initialize the to-spec node if it doesn't exist during the rule and to initialize array elements. New Assign Activity dialog The new Assign Activity supports the same drag & drop paradigm used for the XSLT mapper, greatly streamlining the task of assigning multiple variables. Mediator Component Time window parameter for the resequencer This new parameter lets users initiate a best-effort resequencing based on a time window rather than a number of messages. Support for attachments in the Mediator assign dialog The Mediator assign dialog now supports attachment, enabling usage of the Mediator to transmit attachments even if source and target schemas are different. Adapters & Bindings ChunkSize property added to the File Adapter header properties The ChunkSize property of the File Adapter is now available as a header property, allowing in-process modification of the value for this property. Improved support for distributed WLS JMS topics though automatic rebalancing of listeners The JMS Adapter has been enhanced to subscribe to administrative events from WLS JMS. Based on these events, it dynamically rebalances listeners when there are changes to the members of a local or remote WLS JMS distributed destination. JDeveloper configuration wizard for custom JCA adapters A new wizard is available in JDeveloper to configure custom-built adapters Administration & Enterprise Manager Enhanced purging capabilities to manage database growth Historical instance data can now be purged using three different strategies: batch script, scheduled batch script or data partitioning. 
Asynchronous bulk instance deletion in Enterprise Manager Bulk deletion of instances in Enterprise Manager now executes as an asynchronous operation in Enterprise Manager, returning control to the user as soon as the action has been submitted and acknowledged. B2B Ability to schedule partner downtime This feature allows trading partners to notify each other about planned downtime and to delay delivery of messages during that period. Message sequencing B2B now supports both inbound and outbound message sequencing. Simplified BAM integration with B2B B2B ships with various pre-configured artifacts to simplify monitoring in BAM. Instance Message Java API for B2B The new instance message Java API supports programmatic access to B2B instance message data. Oracle Service Bus (OSB) Certification of the File and FTP JCA Adapters The File and FTP JCA adapters are now certified for use with Oracle Service Bus (in addition to the native transports). Security enhancements Oracle Service Bus now supports SAML 2.0 as well as the OWSM authorization policies. Check the Oracle Service Bus 11.1.1.4 Release Notes for a complete list of new features. Installation, Hot-Pluggability & Certifications Ability to run Oracle SOA Suite on IBM WebSphere Application Server Oracle SOA Suite can now be deployed on IBM WebSphere Application Server Network Deployment (ND) 7.0.11 and IBM WebSphere Application Server 7.0.11. Single JVM developer installation template Oracle SOA Suite can now be targeted to the WebLogic admin server - there is no requirement to also have a managed server. This topology is intended to minimize the memory foorprint of development environments. This is in addition to the list of supported browsers, operating systems and databases already certified in prior releases. Complex Event Processing (CEP) IDE enhancements This release introduces several enhancements to the development IDE, such as adapter wizards and event-type repository. CQL enhancements CQL enhancements include JDBC data cartridges and parametrized queries. Tracing and injecting events in the Event Processing Network (EPN) In the development environment you can now trace and inject events. Check the Oracle CEP 11.1.1.4 Release Notes for a complete list of new features. SOA Suite page on OTN For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Suite 11.1.1.4,JDeveloper 11.1.1.4,WebLogic 10.3.4,JRockit 10.3.4,SOA Community,Oracle,OPN,SOA,Simone Geib,Guido Schmutz,Edwin Biemond,Lucas Jellema,Simon Haslam,Demed,Vikas Anand,Jürgen Kress

    Read the article

  • Don’t Miss The Top Exastack ISV Headlines – Week Of May 26

    - by Roxana Babiciu
    Calypso Technology announced that Calypso version 14 has achieved Oracle Exadata Optimized status through OPN. In simulations of data-intensive straight-through-processing tasks, Calypso achieved performance gains of up to 500% using Exadata hardware – Read more Infosys achieves Oracle SuperCluster Optimized status with Finacle, a core banking solution. Finacle can process 6x the volume of transactions currently processed by the entire US banking system – Read more

    Read the article

  • Integrating Coherence & Java EE 6 Applications using ActiveCache

    - by Ricardo Ferreira
    OK, so you are a developer and are starting a new Java EE 6 application using the most wonderful features of the Java EE platform, like Enterprise JavaBeans, JavaServer Faces, CDI, JPA and other cool technologies. And your architecture needs to hold pieces of data in distributed caches to improve the application's performance, scalability and reliability? If this is the scenario you are facing, maybe you should look closely at the solutions provided by Oracle WebLogic Server. Oracle has integrated WebLogic Server with its champion data-caching technology, Oracle Coherence. This seamless integration between these two products provides a comprehensive environment for developing applications without the complexity of extra Java code to manage the cache as a dependency, since Oracle provides a DI ("Dependency Injection") mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache. In this article, I will show you how to configure ActiveCache in WebLogic and in your Java EE application.

    Configuring WebLogic to manage Coherence

    Before you start changing your application to use Coherence, you need to configure your Coherence distributed cache. The good news is, you can manage all of this without writing a single line of XML or even Java code. This configuration can be done entirely in the WebLogic administration console. The first thing to do is to set up a Coherence cluster. A Coherence cluster is a set of Coherence JVMs configured to form one single view of the cache. This means that you can insert or remove members of the cluster without the client application (the application that generates or consumes data from the cache) knowing about the changes. This concept allows your solution to scale out without changing the application server JVMs. You can grow your application in the data grid layer alone.

    To start the configuration, you need to configure a machine that points to the server on which you want to execute the Coherence JVMs. WebLogic Server allows you to do this very easily using the Administration Console. In this example, I will call the machine "coherence-server". Remember that in order for the machine concept to work, you need to ensure that the NodeManager is running on the target server that the machine points to. The NodeManager executable can be found in <WLS_HOME>/server/bin/startNodeManager.sh. The next thing to do is to configure a Coherence cluster. In the WebLogic administration console, go to Environment > Coherence Clusters and click "New". Call this Coherence cluster "my-coherence-cluster". Click Next. Specify a valid cluster address and port. The Coherence members will communicate with each other through this address and port. Our Coherence cluster is now configured.

    Now it is time to configure the Coherence members and add them to this cluster. In the WebLogic administration console, go to Environment > Coherence Servers and click "New". In the "Name" field, enter "coh-server-1". In the "Machine" field, associate this Coherence server with the machine "coherence-server". In the "Cluster" field, associate this Coherence server with the cluster named "my-coherence-cluster". Click "Finish". Start the Coherence server using the "Control" tab of the WebLogic administration console. This will instruct WebLogic to start a new Coherence JVM on the target machine, which should then join the pre-defined Coherence cluster.
    Configuring your Java EE Application to Access Coherence

    Now let's move on to the fun part of the configuration. The first thing to do is to tell your Java EE application which Coherence cluster to join. Oracle has updated the WebLogic Server deployment descriptors, so you will not have to change your code or the containers' deployment descriptors like application.xml, ejb-jar.xml or web.xml. In this example, I will show you how to enable DI ("Dependency Injection") of a Coherence cache into a Servlet 3.0 component. In the WEB-INF/weblogic.xml deployment descriptor, put the following metadata information:

        <?xml version="1.0" encoding="UTF-8"?>
        <wls:weblogic-web-app xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd
                http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
            <wls:context-root>myWebApp</wls:context-root>
            <wls:coherence-cluster-ref>
                <wls:coherence-cluster-name>my-coherence-cluster</wls:coherence-cluster-name>
            </wls:coherence-cluster-ref>
        </wls:weblogic-web-app>

    As you can see, using the "coherence-cluster-name" tag, we are informing our Java EE application that it should join "my-coherence-cluster" when it loads in the web container. Without this information, the application will not be able to access the predefined Coherence cluster. It will form its own Coherence cluster without any members. So never forget to put this information in. Now put the coherence.jar and active-cache-1.0.jar dependencies in your WEB-INF/lib application classpath. You need to deploy these dependencies so ActiveCache can automatically take care of the Coherence cluster join phase. These dependencies can be found in the following locations:

    - <WLS_HOME>/common/deployable-libraries/active-cache-1.0.jar
    - <COHERENCE_HOME>/lib/coherence.jar

    Finally, you need to write the code that accesses the Coherence cache in your Servlet. In the following example, we have a Servlet 3.0 component that accesses a Coherence cache named "transactions" and prints to the browser output the content (the amount property) of one specific transaction.

        package com.oracle.coherence.demo.activecache;

        import java.io.IOException;

        import javax.annotation.Resource;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        import com.tangosol.net.NamedCache;

        @WebServlet("/demo/specificTransaction")
        public class TransactionServletExample extends HttpServlet {

            @Resource(mappedName = "transactions")
            NamedCache transactions;

            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                int transId = Integer.parseInt(request.getParameter("transId"));
                Transaction transaction = (Transaction) transactions.get(transId);
                response.getWriter().println("<center>" + transaction.getAmount() + "</center>");
            }
        }

    That's it! No more configuration is necessary and you are all set to start putting data into, and getting data from, Coherence. As you can see in the example code, the Coherence cache is treated as a normal dependency in the Java EE container. The magic happens behind the scenes when ActiveCache lets your application join the defined Coherence cluster.
    The most interesting thing about this approach is that, no matter which type of Coherence cache you are using (Distributed, Partitioned, Replicated, WAN-Remote), to the client application it is just a simple attribute member of type com.tangosol.net.NamedCache. And it is all managed by the Java EE container as a dependency. This means that if you inject the same dependency (the Coherence cache named "transactions") in another Java EE component (a JSF managed bean, a stateless EJB), the cache will be the same. Cool, isn't it? Thanks to CDI, we can extend the same support to non-Java EE standard components like simple POJOs. This means that you are not forced to use only Servlets, EJBs or JSF in order to inject Coherence caches. You can take the same approach with regular POJOs created and managed by lightweight containers like Spring or Seam.
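
    To illustrate that last point, here is a minimal sketch (not from the original article) of the same "transactions" cache injected into a hypothetical stateless session bean; the Transaction class is the one assumed by the servlet example above:

        package com.oracle.coherence.demo.activecache;

        import javax.annotation.Resource;
        import javax.ejb.Stateless;

        import com.tangosol.net.NamedCache;

        @Stateless
        public class TransactionServiceBean {

            // Same cache name as in the servlet, so both components
            // see exactly the same cache entries.
            @Resource(mappedName = "transactions")
            NamedCache transactions;

            public void recordTransaction(int transId, Transaction transaction) {
                transactions.put(transId, transaction);
            }
        }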

    Read the article

  • .NET Reflector 7.2 Early Access Build 1 Released

    - by Bart Read
    I've just posted up full details of this release on the .NET Reflector blog at http://www.reflector.net/2011/05/life-is-a-journey-not-a-destination-net-reflector-7-2-ea-1-has-been-released/ and, breaking with previous tradition, this includes a fairly extensive changelog. You can download this EA build from the .NET Reflector homepage at http://www.reflector.net/. Enjoy! (And please don't forget to tell us what you think on the forum, http://forums.reflector.net/, using the tags #7.2 and #eap.)...(read more)

    Read the article

  • Godaddy multiple domain problem

    - by gayancc
    I have a GoDaddy Deluxe plan and here is my problem: I have two domains, for example e1.com and e2.com. Both are hosted on the same hosting plan. First I created a folder for each domain in the root folder and uploaded the two web sites, but when I try to run my sites, the URL for e1 always shows http://e1.com/e1/ and for e2 it shows http://e2.com/e2. Can I avoid showing the e1 and e2 folders and only show http://e1.com and http://e2.com? Thank you.

    Read the article

  • CodePlex Daily Summary for Sunday, May 16, 2010

    CodePlex Daily Summary for Sunday, May 16, 2010New Projects3D Calculator: 3D Calc is a simple calculator application for Windows Phone 7, the purpose of this project is to demo the 3D animations capabilities of WP7 and sh...azaleas: AzaleasBlueset Studio Opensource Projects: Only for Opensource projects form Blueset Studio.Breck: A Phoenix and Jumper Moneky Production: Breck is a first person non-violent shooter developed in C++ and Dark GDK. After the main game is developed we are looking into making a sequel or...Discuz! Forum SDK: This project is use to login in and post or reply topic on discuz forum.Dominion.NET: Evolving Dominion source code originally written in VB6 and posted by "jatill" on Collectible Card Game Headquarters. Migration of the design and s...EkspSys2010-ITR: A mini project for the course Experimental System devolopment in spring 2010Facebook Graph Toolkit: This project is a .Net implementation of the Facebook Graph API. The aim of this project is to be a replacement to the existing Facebook Toolkit (h...iFree: This is a solution for Vietnamese network socialInfoPath Editor for Developer: InfoPath Editor for developer allows user to modify the html text directly inside InfoPath designer or filler and push the change back to InfoPath ...iZeit: Run your own online calendar, with blog integration, recurrence, todo list and categories.machgos dotNet Tests: Just some little test-projects for learningmim: TBAMinePost: MinePost is a game made for the first 48 hour Reddit Game Jam.Mockina: Mockina is a mock framework. Expression tree syntax is used to specify which members to mock, both public and non-public. The code is easy to under...MSBuild Launch Pad (mPad): This is just another shell extension for MSBuild to enable quick execution of MSBuild scripts via Windows Explorer context menu. (C) 2010 Lex LiPeacock: A browser like tabbed applicationPrimeCalculation: PrimeCalculation is a .NET app to calculate primes in a given range. Speed on Core2Duo 2,4GHZ: Found all primes from 0 to 1 billion in 35 seconds (...Slightly Silverlight: A Framework that leverages Silverlight for processing, business logic but standard HTML for the presentation layer.Stopwatch: Stopwatch is a tool for measuring the time. To start and pause stopwatch you only need to press a key on the keyboard. An additional context menu a...YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib is an XML Serialization library which helps you structure freely the XML result, choose among private and public fields to be serialized, an...New ReleasesActivate Your Glutes: v1.0.3.0: This release is a migration to VS2010, .Net 4, MVC2 and Entity Framework 4. The code has also been considerably cleaned up - taking advantage of E...AnyCAD: AnyCAD.Free.ENU.v1.1: http://www.anycad.net Modeling •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extrude, Loft, Chamfer, Sweep, Revol •Boolean: ...Blueset Studio Opensource Projects: 多功能计算器 3.5: 稳定版本。Code for Rapid C# Windows Development eBook: LLBLGen LINQPad Data Context Driver Ver 1.0.0.0: First release of a Static LLBLGen Pro Data Context Driver for LINQPad I recommend LINQPad 4 as it seems more stable with this driver than LINQPad 2.DSQLT - Dynamic SQL Templates: Release 1.2. Some behaviour has changed!!: Attention. Some behaviour has changed! Now its necessary to use WildCards in the pattern-parameter for DSQLT.AllSourceContains DSQLT.Databases DSQ...FDS AutoCAD plug-in: FDS to AutoCAD plug-in: Basic functionality was implemented. 
Some routines like setting fds executable location are still not automated.Feature Builder Guidance Extensions: FBGX 2 - Standalone FX: Background: The Feature Builder Guidance is extensible and displays guidance content supplied by all the Feature Builder Guidance Extensions (FBGX...Floe IRC Client: Floe IRC Client 2010-05 R3: - You can now right click on the input box to get options for toggling bold, underline, colors, etc. - The size of the nickname column is now saved...Floe IRC Client: Floe IRC Client 2010-05 R4: - A user's channel status now appears next to their nick when they talk (e.g. @Nick or +Nick) - Fixed an error where certain kinds of network probl...HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.0: Version 1.0 Thanks to Wolfgang for all his help. I let this project languish for too long while focusing on other things, but his involvement has ...InfoPath Editor for Developer: InfoPath Editor Beta 1: Intial Release: Can load InfoPath inner html. Can edit InfoPath inner html. InfoPath 2007 only.LinkSharp: LinkSharp 0.1.0: First release of LinkSharp. Set up iis, and use the sql script to create a new database.PowerAuras: PowerAuras V3.0.0F: This version adds better integration with GTFO New Flags Added PvP flag In 5-Man Instance In Raid Instance In Battleground In ArenaRx Contrib: V1.4: Add the ability to catch internal exception and the ability to publish error by queue adaptersSEO SiteMap: SEO SiteMap RC1: -SevenZipLib Library: v9.13.2: Stable release associated with 7z.dll 9.13 beta. Ability to create and update archives not implemente yet.Silverlight / WPF Controls: Upload, FlipPanel, DeepZoom, Animation, Encryption: Code Camp Demonstration: This code example demonstrates MVVM/MEF with WPF with attached properties,security and custom ICommand class.SQL Data Capture - Black Box Application Testing: SQLDataCapture V1.2: Added Entity Framework Support to CRUD generator (Insert Stored Procedure) and switched to VS 2010 for development.Stopwatch: Stopwatch 0.1: Stopwatch Release 0.1VCC: Latest build, v2.1.30515.0: Automatic drop of latest buildYet another developer blog - Examples: Asynchronous TreeView in ASP.NET MVC: This sample application shows how to use jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET MVC. This application is accompani...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRawrPHPExcelBlogEngine.NETMicrosoft Biology FoundationCustomer Portal Accelerator for Microsoft Dynamics CRMWindows Azure Command-line Tools for PHP DevelopersMirror Testing SystemN2 CMSStyleCop

    Read the article

  • resolv.conf doesn't get set on reboot when networking is configured for static ip

    - by kenneth koontz
    I'm experiencing what appears to be a hostname resolution issue in Ubuntu 12.04 server edition when configuring my computer to use a static IP. In /etc/network/interfaces:

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.1.28
            netmask 255.255.255.0
            gateway 192.168.1.1

    Running $ sudo apt-get upgrade results in 'Failed to fetch...' errors:

        . . .
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en_US Something wicked happened resolving 'us.archive.ubuntu.com:http' (-5 - No address associated with hostname)
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en Something wicked happened resolving 'us.archive.ubuntu.com:http' (-5 - No address associated with hostname)
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    When I change my /etc/network/interfaces to:

        auto eth0
        iface eth0 inet dhcp

    everything works fine. Looking into /etc/resolv.conf provides some more hints. In cases where I was getting the resolving issue, resolv.conf was empty; no nameservers were specified. When I changed from static to dhcp and restarted networking, /etc/resolv.conf got written with 'nameserver 192.168.1.1'. Switching back from dhcp to static and restarting doesn't remove the nameserver entry. When I restart the system with static set, resolv.conf is empty. When I restart the system with dhcp set, resolv.conf has nameserver 192.168.1.1. So it appears that the issue is that resolv.conf is not getting written correctly? Which package/code is responsible for writing to resolv.conf? Is there a particular package whose open issues I can take a look at? UPDATE: istream posted a good article discussing changes to resolv.conf in 12.04: http://www.stgraber.org/2012/02/24/dns-in-ubuntu-12-04/
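
    For reference, the approach the linked article describes amounts to letting resolvconf build /etc/resolv.conf from dns-* options in the interfaces file, so a static stanza carries its own DNS settings. A minimal sketch, assuming the router at 192.168.1.1 also acts as the nameserver (as the DHCP case above suggests):

        auto eth0
        iface eth0 inet static
            address 192.168.1.28
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1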

    Read the article

  • CodePlex Daily Summary for Sunday, April 25, 2010

    CodePlex Daily Summary for Sunday, April 25, 2010New Projects281slides: 281slides is a project to demonstrate how one could go about implementing something similar to http://280slides.com in Silverlight3.Alex.XP's ARMA2 Chinese Language Pack Tools: Alex.XP's ARMA2 Chinese Language ToolsAuto Version Web Assets: The AVWA project is an HTTP Module written in C# that is designed to allow for versioning of various web assets such as .CSS and .JS files. This a...CAECE Twitter Clon: Proyecto para hacer un clon de twitter alumnos CAECE 2010DNSExchanger: Provides users to switch their PC's DNSs with pre-defined DNS with one click. Fluent ViewModel Configuration for WPF (MVVM): Fluent MVVM Configuration for WPF. A powerful yet simple interface for configuring view models for WPF. Eliminates INotifyPropertyChanged duplic...Genetic Algorithm N-Queens Solver: Genetic Algorithm N-Queens Solver with Multithreaded GUI.Hangmanondotnet: Just a starterHelium Frog Animator: Here is the Source code for the Helium Frog Animator. It is released under the GNU General Public Licence. The software enables stop motion animati...LISCH Collision Resolution, AVL Trees: LISCH Collision Resolution, AVL Trees Last Insertion Standart Colesced HashingNetPE: NetPE is a Portable Executable(PE) editor with full Metadata support. It is developed in pure C#.Proyecto Nilo: nada por ahoraSQL Schema Source Control: Track database schema changes automatically C# application that you can run against your SQL Databases (supports SQL 2008 right now, but you cou...uTorrent-Net-Client: A network client for uTorrent over the uTorrent-WebAPI. The Client use the API implementation from "uTorrent Web Client API Wrapper Library" (http:...Visual Leak Detector for Visual C++ 2008/2010: Enhanced Memory Leak Detection for Visual C++Visual Studio 2010 AutoScroller Extension: This is an extension to provide auto-scrolling to the Visual Studio 2010 environment. Simply middle click and drag the mouse in the direction yo...Vje: Vje projectVs2010-TipSite - Enter Island: This project is a visual studio 2010 project created in Silverligt. The project used to give using tips about visual studio 2010 by movies clips an...WKURM: Research Methods project @ western kentucky universityYupsky: yupsky webNew Releases.NET DiscUtils: Version 0.8: This is the 0.8 release of DiscUtils. New in this release are: An NFS client, supporting access to virtual disks held on an NFS server. A PowerS...Bluetooth Radar: Version 2.2: Add Settings window Get installed services on the deveice Check if Object Exchange is installed and changed properties. Add Windows Bluetooth...CSharp Intellisense: V1.7: major improvements: - Select best suggestion - on going changes filters (the filters will changed according to the current typing) - remember last ...DNSExchanger: DNSExchanger Beta v0.1: First release of the project, DNSExchanger. It requires, 32-bit Operating System (XP, Vista, 7) and need to be runned with administration credent...DotNetNuke® Form and List (formerly User Defined Table): 05.01.03: Form and List 05.01.03What's New: This release, Form and List 05.01.03, will be a stabilization release. It requires at least DotNetNuke 5.1.3 for...Enki Char 2 BIN: Enki Char 2 Bin: This program converts Characters to Binary and vice versaFluent ViewModel Configuration for WPF (MVVM): FluentViewModel Alpha1: This is a debug build of the FluentViewModel library. This has been provided to get feed back on the API and to look for bugs. 
For an example on h...Hangmanondotnet: Hangman: Just a previewHelium Frog Animator: Helium Frog 2.06 Documentation: Complete User Guide documentation in html formatHelium Frog Animator: Helium Frog 2.06 Source Code: Zip file contains all Visual Basic 6 source code, Artwork, sound files etc.Helium Frog Animator: Helium Frog Version 2.06: This file is the released version on Helium Frog 2.06. It contains binary files and required runtime libraries.Helium Frog Animator: Motion Jpeg Handling 10: Source code , module and debugging application in C# a) Module concatenates .jpg files to motion jpeg .avi file. b) Module retrieves any required ...Helium Frog Animator: Sample Grabber 03: Source code and debug program in C# a) Module lists all the available DirextX source devices b) Sets up video streaming to a picturebox by creating...Henge3D Physics Library for XNA: Henge3D Source (2010-04): The biggest change in this release was the addition of the OnCollision and OnSeparation "events" in the RigidBody class. An attached handler will r...HouseFly controls: HouseFly controls alpha 0.9.4.1: HouseFly controls release 0.9.4.1 alphaHTML Ruby: 6.22.0: Added new options for adjusting ruby line height and text line height Live preview for options Adjusted applied styles Added option to report...HTML Ruby: 6.22.1: space by word if ASCII character improved handling of unclosed ruby tagMultiwfn: Multiwfn1.3_binary: Multiwfn1.3_binaryMultiwfn: multiwfn1.3_source: multiwfn1.3_sourceRapid Dictionary: Rapid Dictionary Alpha 1.0: Try auto updatable version: http://install.rapiddict.com/index.html Rapid Dictionary Alpha 1.0 includes such functionality:you can run translation...Silverlight Input Keyboard: Version 1.5 for Silverlight 4: Dependency System.Windows.Interactivity.dll from Blend 4 RC http://www.microsoft.com/downloads/details.aspx?FamilyID=88484825-1b3c-4e8c-8b14-b05d02...SQL Schema Source Control: 1.0: Initial ReleaseUDC indexes parser: UDC indexex parser Beta: LALR(1): 1) Невозможно использовать знак распространения на общие и специальные определители, за исключением определителей в скобках (), (0), (=), ...uTorrent-Net-Client: uTorrent-Net-Client: This download contains the uTorrentNetClient and the 7Zip Windows-Service. Before you can use both, you must configuration some points in the App.C...VidCoder: 0.3.0: Changes: Added customizable columns on the Queue. Right click->Customize columns, then drag and drop to choose and reorder. Column sizes will also...Visual Leak Detector for Visual C++ 2008/2010: v2.0: New version of VLD. This adds support for x64 applications and VS 2010.Visual Studio 2010 AutoScroller Extension: AutoScroller v0.1: Initial release of Visual studio 2010 auto-scroller extension. Simply middle click and drag the mouse in the direction you wish to scroll, further...Yasbg: It's Static GUI: Many changes have been made from the previous release. Read the README! This release adds a GUI and RSS support. From now on, this program is only...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitSilverlight ToolkitMicrosoft SQL Server Product Samples: Databasepatterns & practices – Enterprise LibraryWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesPHPExcelMost Active Projectspatterns & practices – Enterprise LibraryRawrGMap.NET - Great Maps for Windows Forms & PresentationBlogEngine.NETParticle Plot PivotNB_Store - Free DotNetNuke Ecommerce Catalog ModuleDotNetZip LibraryN2 CMSFarseer Physics Enginepatterns & practices: Composite WPF and Silverlight

    Read the article

  • Don’t Miss The Top Exastack ISV Headlines – Week Of June 5

    - by Roxana Babiciu
    Smartsoft's OCEAN Payment Processing Solution achieves Oracle Exadata Optimized status. "Performance is the most important issue for our success in the market and running OCEAN on the Oracle Exadata Database Machine provides customers with extreme performance.” – Learn more Banking solution FORBIS Ltd’s FORPOST achieves Oracle Exadata, Exalogic and SuperCluster Ready Status. “We are glad to offer our current and future customers the newest features provided by Oracle Engineered Systems to achieve maximum reliability and speed operation.” – Learn more

    Read the article

  • Disk errors on tty and syslog/dmesg

    - by Shoaibi
    Recently I have started to get a lot of these errors: Jun 18 08:57:42 abacus kernel: [ 401.554292] ata5: SError: { HostInt 10B8B } Jun 18 08:57:42 abacus kernel: [ 401.559346] sr 4:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00 Jun 18 08:57:42 abacus kernel: [ 401.560191] ata5.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Jun 18 08:57:42 abacus kernel: [ 401.560231] res 51/20:03:00:00:00/00:00:00:00:00/a0 Emask 0x40 (internal error) Jun 18 08:57:42 abacus kernel: [ 401.575310] ata5.00: status: { DRDY ERR } Jun 18 08:57:42 abacus kernel: [ 401.579801] ata5: hard resetting link Jun 18 08:57:42 abacus kernel: [ 401.929320] ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 18 08:57:42 abacus kernel: [ 401.941936] ata5.00: configured for UDMA/100 Jun 18 08:57:42 abacus kernel: [ 401.969426] ata5: EH complete Jun 18 08:57:54 abacus kernel: [ 413.527699] ata5.00: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x6 Jun 18 08:57:54 abacus kernel: [ 413.527779] ata5.00: irq_stat 0x40000001 Jun 18 08:57:54 abacus kernel: [ 413.527822] ata5: SError: { HostInt 10B8B } Jun 18 08:57:54 abacus kernel: [ 413.527901] sr 4:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00 Jun 18 08:57:54 abacus kernel: [ 413.528103] ata5.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Jun 18 08:57:54 abacus kernel: [ 413.528142] res 51/20:03:00:00:00/00:00:00:00:00/a0 Emask 0x40 (internal error) Jun 18 08:57:54 abacus kernel: [ 413.528184] ata5.00: status: { DRDY ERR } Jun 18 08:57:54 abacus kernel: [ 413.528303] ata5: hard resetting link Jun 18 08:57:54 abacus kernel: [ 413.875894] ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 18 08:57:54 abacus kernel: [ 413.888267] ata5.00: configured for UDMA/100 Jun 18 08:57:54 abacus kernel: [ 413.916365] ata5: EH complete Jun 18 08:57:56 abacus kernel: [ 415.537834] ata5.00: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x6 Jun 18 08:57:56 abacus kernel: [ 415.545253] ata5.00: irq_stat 0x40000001 Jun 18 08:57:56 abacus kernel: [ 415.549788] ata5: SError: { HostInt 10B8B } Jun 18 08:57:56 abacus kernel: [ 415.554840] sr 4:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00 Jun 18 08:57:56 abacus kernel: [ 415.555201] ata5.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Jun 18 08:57:56 abacus kernel: [ 415.555242] res 51/20:03:00:00:00/00:00:00:00:00/a0 Emask 0x40 (internal error) Jun 18 08:57:56 abacus kernel: [ 415.570483] ata5.00: status: { DRDY ERR } Jun 18 08:57:56 abacus kernel: [ 415.574695] ata5: hard resetting link Jun 18 08:57:56 abacus kernel: [ 415.924954] ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 18 08:57:56 abacus kernel: [ 415.936831] ata5.00: configured for UDMA/100 Jun 18 08:57:56 abacus kernel: [ 415.965001] ata5: EH complete Jun 18 08:58:02 abacus kernel: [ 421.529784] ata5.00: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x6 Jun 18 08:58:02 abacus kernel: [ 421.529904] ata5.00: irq_stat 0x40000001 Jun 18 08:58:02 abacus kernel: [ 421.530023] ata5: SError: { HostInt 10B8B } Jun 18 08:58:02 abacus kernel: [ 421.530104] sr 4:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00 Jun 18 08:58:02 abacus kernel: [ 421.530425] ata5.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Jun 18 08:58:02 abacus kernel: [ 421.530466] res 51/20:03:00:00:00/00:00:00:00:00/a0 Emask 0x40 (internal error) Jun 18 08:58:02 abacus kernel: [ 421.530583] ata5.00: status: { DRDY ERR } Jun 18 08:58:02 abacus kernel: [ 421.530705] ata5: hard resetting link Jun 18 08:58:02 abacus kernel: [ 421.873218] ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 
300) Jun 18 08:58:02 abacus kernel: [ 421.885040] ata5.00: configured for UDMA/100 Jun 18 08:58:02 abacus kernel: [ 421.913404] ata5: EH complete Are these critical error messages? What would be the cause and remedy?

    Read the article

  • How to enable services Discovery API in GoogleCL?

    - by Marcos
    There are bits and pieces of information all over the place but I'm trying to put it all together so that GoogleCL finally accesses more than the initial 7 services. Does anyone know of a step-by-step? Right now any attempt outside these results in the error message:

        google tasks list
        Did you specify the service correctly? Must be one of 'picasa', 'blogger', 'youtube', 'docs', 'contacts', 'calendar', 'finance'

    I installed GoogleCL from the Ubuntu repos, authenticated a few bundled services like contacts, docs etc. and those work great, giving me access to do certain operations like upload from the command line. I would really like to get it going to support tasks and all the other eligible Google services shown at https://code.google.com/apis/explorer/#_s=tasks Here are some guides/partial steps I've found: http://code.google.com/p/googlecl/wiki/DiscoveryManual (indicates you need to check out updated GoogleCL from the Subversion repository), http://code.google.com/p/google-api-python-client/wiki/Installation (easy_install --upgrade google-api-python-client), http://code.google.com/p/googlecl/wiki/Install and http://code.google.com/p/googlecl/source/checkout:

        sudo -i
        cd /usr/local/src/
        svn checkout http://googlecl.googlecode.com/svn/trunk/ googlecl-read-only
        cat googlecl-read-only/INSTALL.txt
        cd /usr/local/src/googlecl-read-only/
        python setup.py install

    Result:

        $ google discovery list
        Traceback (most recent call last):
          File "/usr/bin/google", line 488, in run_interactive
            run_once(options, args)
          File "/usr/bin/google", line 540, in run_once
            options.config)
          File "/usr/bin/google", line 364, in import_service
            force_gdata_v1 = config.lazy_get(package.SECTION_HEADER,
        AttributeError: 'module' object has no attribute 'SECTION_HEADER'

    Read the article

  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how much you try. We have had a ghost team build controller hanging around for a while now, and it had defeated my best efforts to get rid of it. The build controller was from our old TFS server from before our TFS 2010 beta 2 upgrade and was really starting to annoy me. Every time I try to delete it I get the message: "Controller cannot be deleted because there are build in progress" (Manage Build Controller dialog). Figure: Deleting a ghost controller does not always work. I ended up checking all of our 172 Team Projects for the build that was queued, but did not find anything. Jim Lamb pointed me to the "tbl_BuildQueue" table in the Team Project Collection database and sure enough there was the nasty little beggar. Figure: The ghost build was easily spotted. Adam Cogan asked me: "Why did you suspect this one?" Well, there are a number of things that led me to suspect it:

    - QueueId is very low: look at the other items, they are in the thousands, not single digits
    - ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to "zzUnicorn"
    - DefinitionId: this is a very low number and I looked it up in "tbl_BuildDefinition" and it did not exist
    - QueueTime: as we did not upgrade to TFS 2010 until late 2009, a date of 2008 for a queued build is very suspect
    - Status: a status of 2 means that it is still queued

    This build must have been queued long ago when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010, it would have created the "zzUnicorn" controller to handle any build servers that already existed. I had previously deleted the agent, but leaving the controller just looks untidy. Now that the ghost build has been identified there are two options:

    1. Delete the row. I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported.
    2. Set the Status to cancelled (recommended). This is the best option, as TFS will then clean it up itself.

    So I set the Status of this build to 2 (cancelled) and sure enough it disappeared after a couple of minutes, and I was then able to delete the "zzUnicorn" controller. Figure: Almost completely clean. Now all I have to do is get rid of that untidy "zzBunyip" agent, but that will require rewriting one of our build scripts, which will have to wait for now. Technorati Tags: ALM,TFBS,TFS 2010
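
    For illustration only, the fix described above amounts to a single update against the Team Project Collection database. The QueueId below is hypothetical (substitute the ghost build's QueueId found in tbl_BuildQueue), the status value is the one used in the write-up above, and since direct edits to TFS databases are otherwise unsupported, treat this as a sketch rather than a recipe:

        UPDATE tbl_BuildQueue
        SET Status = 2       -- the 'cancelled' value used in the write-up above
        WHERE QueueId = 5;   -- hypothetical: the ghost build's QueueId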

    Read the article

  • Very very slow transfer speeds between Windows 7 and samba server running on Ubuntu 11.10/12.04 minimal

    - by kuzyt
    As mentioned in the title, I tried transferring files between Windows 7 and the Samba server running on both Ubuntu 11.10 and 12.04, but both showed very slow transfer speeds. Can someone please guide me in the right direction to debug this problem?

    wget --output-document=/dev/null http://tokyo1.linode.com/100MB-tokyo.bin
    --2012-08-21 22:02:17-- http://tokyo1.linode.com/100MB-tokyo.bin
    Resolving tokyo1.linode.com (tokyo1.linode.com)... 106.187.33.12
    Connecting to tokyo1.linode.com (tokyo1.linode.com)|106.187.33.12|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 104857600 (100M) [application/octet-stream]
    Saving to: `/dev/null'
    8% [=============> ] 8,923,980 64.8K/s eta 15m 0s

    wlan0 IEEE 802.11abgn ESSID:"TNET"
    Mode:Managed Frequency:2.462 GHz Access Point: 58:6D:8F:26:20:7A
    Bit Rate=117 Mb/s Tx-Power=20 dBm
    Retry long limit:7 RTS thr:off Fragment thr:off
    Power Management:off
    Link Quality=57/70 Signal level=-53 dBm
    Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
    Tx excessive retries:101 Invalid misc:2448 Missed beacon:0

    03:00.0 Network controller: Atheros Communications Inc. AR9300 Wireless LAN adaptor (rev 01)
    Subsystem: Atheros Communications Inc. Device 3112
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
    Latency: 0, Cache Line Size: 64 bytes
    Interrupt: pin A routed to IRQ 16
    Region 0: Memory at fea00000 (64-bit, non-prefetchable) [size=128K]
    Expansion ROM at fea20000 [disabled] [size=64K]
    Capabilities: [40] Power Management version 3
    Flags: PMEClk- DSI- D1+ D2- AuxCurrent=375mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
    Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [50] MSI: Enable- Count=1/4 Maskable+ 64bit+
    Address: 0000000000000000 Data: 0000
    Masking: 00000000 Pending: 00000000
    Capabilities: [70] Express (v2) Endpoint, MSI 00
    DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
    ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
    DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
    RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
    MaxPayload 128 bytes, MaxReadReq 512 bytes
    DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
    LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <2us, L1 <64us
    ClockPM- Surprise- LLActRep- BwNot-
    LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
    ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
    LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
    DevCap2: Completion Timeout: Not Supported, TimeoutDis+
    DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
    LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB
    Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
    Compliance De-emphasis: -6dB
    LnkSta2: Current De-emphasis Level: -6dB
    Capabilities: [100 v1] Advanced Error Reporting
    UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
    UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
    UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
    CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
    CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
    AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-
    Capabilities: [140 v1] Virtual Channel
    Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
    Arb: Fixed- WRR32- WRR64- WRR128-
    Ctrl: ArbSelect=Fixed
    Status: InProgress-
    VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
    Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
    Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01
    Status: NegoPending- InProgress-
    Capabilities: [300 v1] Device Serial Number 00-00-00-00-00-00-00-00
    Kernel driver in use: ath9k
    Kernel modules: ath9k
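    One way to narrow this down (sketched here with placeholder addresses, assuming an iperf build is available on both machines) is to measure raw TCP throughput over the same wireless link before blaming Samba; the "Tx excessive retries:101" counter above already hints that the wireless link itself may be dropping frames:

        # On the Ubuntu box, start an iperf server:
        iperf -s
        # On the Windows 7 box, point iperf at the Ubuntu machine's LAN address (placeholder shown):
        iperf -c 192.168.1.10 -t 30

    If the raw numbers are also poor, the problem is the wireless link rather than Samba. If they are fine, a commonly suggested (though not universally effective) smb.conf tweak to try on the Ubuntu side is:

        [global]
        socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536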

    Read the article

  • CodePlex Daily Summary for Tuesday, September 04, 2012

    CodePlex Daily Summary for Tuesday, September 04, 2012Popular ReleasesPE file reader: READPE-e9ff717a638d.zip: Introduced some new code which parses the IMAGENTHEADERS. At the moment the command line options dosheader and imagentheaders are working and and example of their usage can be... D:\>readpe pe-files\main.exe dosheader imagentheadersMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.64: Another attempt to fix the fiasco that was my bad decision to rename the DLLs to get away from a strong-name collision that was causing lots of problems for me. Too many existing projects expected AjaxMin.dll, and lots of things broke downstream from me. This release keeps the .net 2.0 version named AjaxMin.dll. The new .net 3.5 and .net 4.0 versions are named AjaxMinLibrary.dll. If an existing project is expecting the old name, they should continue to pick up the .net 2.0 version (since the ...Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83NWebsec: NWebsec 1.0.3: This release fixes two bugs in the NWebsec.Mvc package. Go get it on NuGet! http://nuget.org/packages/NWebsec.Mvc/ These work items made it into the release: 9 10 Check out the Documentation to learn how it works. This release has been tagged v1.0.3 in source control. Enjoy!RBAC Manager R2 for Exchange 2010 SP2, Exchange 2013 Preview and Office 365: RBAC Manager R2 1.5.5.0: now supports to manage RBAC on Office 365 'remember password' feature now saves the password as encrypted as opposed to plain-text format in version 1.5.0.0 DPAPI is used to encrypt the saved password; for more information about DPAPI please check: Managed DPAPI Part I: ProtectedData http://blogs.msdn.com/b/shawnfa/archive/2004/05/05/126825.aspx The tool requires HTTP/HTTPS network connection to the Exchange server Known Bugs: Active Directory lookup is not working remotely and crashes the ...WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. 
Check Documentation section for detail.Dynamics AX Build Scripts: AX TFS Build Library Beta - v0.2.0.0: Beta release of TFS 2010 workflow code activities for AX 2009 and AX 2012. Build template for AX 2012 included. There is one refactor of code that will break your existing workflows. The AOS workflow step to stop/start AOS now expects the actual windows service name, not the port number of the AOS. There now is a new step to retrieve server settings, which can get the service identifier based on the port number. The registry has to be read to retrieve these settings, and we didn't want to ke...Cosmo OS: Cosmo OS Lama Preview: Info Sulla Release Lama Preview ( Prima Preview Pubblica ) Data Di Rilascio: 2 / 09 / 12 Build: 1950 Ramo Di Sviluppo: cosmo_os.preview.lama.bid1535543 Tipo Release: Stable / PreviewNETDeob0: NETDeob 0.3.1 BETA binaries: 0.3.1Custom Captcha Plugin for Kooboo CMS for adding content or sending feedback: Custom Captcha Validator Plugin v1.1 for Kooboo: Download file CustomCaptchaValidatorPlugin.dll and install it to KooBoo CMS. Release 1.1: Fixed error: "A generic error occurred in GDI+" (if hosting is less than Windows Server 2008 or Windows 7) - http://forum.kooboo.com/yafpostsm6602Custom-captcha---any-best-practice.aspx#post6602TSQL Code Smells Finder: POC 1.01: Proof of concept 1.01 TSQLDomTest.ps1 and Errors.Txt are requiredSaturn Kinect: Saturn Kinect + Sample Applications - Release 3: This release includes : - Saturn Kinect Library - Kinect Motion Capture Application - Controlling mouse cursor sample - Hand swip detection sample + Source CodesDiscuzViet: DiscuzX2.5_02092012_Vietnam: DiscuzX2.502092012VietnamBookmark Collector: 01.00.00: This is the first release with a minimal feature set. You can save, edit, delete, and display a list of URLs from various sites.EntLib.com????????: EntLib.com???????? v3.0: EntLib eCommerce Solution ???Microsoft .Net Framework?????????????????????。Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom Sparse Kronecker product much more efficient (now leverages sparsity) Direct access to raw matrix storage implementations for advanced extensibility Now also separate package for signed core library with a strong name (we dropped strong names in v2.2.0) Also available as NuGet packages...New ProjectsAction Bar: Action Bar is a SNS network based on activities.async/await C# Samples: Project demonstrating new C# feature - async and await. You can find here several solutions to make UI calls asynchronous: APM, EAP and async.Attribute Based Xaml Generator: Dynamic Xaml UI Generator and Editor Just point it to a dll or an exe and then navigate through your namespaces to your classB INI Sharp Library: Full support for INI files.brevis nopCommerce Extensions: Extensions for nopCommerce open soruce e-commerce solution (several Versions). !Contents nop 1.90 fpr testing the new Lib! This will be removed after 1. releaseConEmu - Windows console with tabs: ConEmu (short for Console Emulator) is a console window and tabbed environment for Windows. 
Tabs, Fonts, Quake style, Transparency and hundreds of other optionsContrib.Mod.AccountWidgets: Orchard module for adding login and registration widgetsCSharp GUI for Mono: This is an application which makes use of "Mono" to execute CSharp programs. It provides a graphical user interface to run the CSharp program. DocCollection: ???????????????http://www.qlili.comfanpages: Fanatics!ISIS Associations Manager: Application Web permettant la gestion d'Associations. (Membres, Emailing, Calendier)LogMan: ????????????b/s??,??python?? require: web.pyLucifure Stash - Azure Table Storage Client: Lucifure Stash is an alternate Azure table storage client, which supports arrays, enumerations, large data > 64KB, serialization, morphing and more.Mod.EverlastingLogin: Orchard module to allow a user to stay logged in for a certain amount of time using cookiesPHP Extra Functions: PHP Extra Functions is a suite of functions that extend common libraries with easy to use functions. For example, functions are added to MySQLi to simplify use.Raise events controlled: This is an example of raising you events controlled (with exception handling)sb0t v.5: sb0t 5 development page.SharePoint Mobile OA Platform: Via mobile device, by using the SharePoint Mobile Support, Web Service, Client OM, WCF Data Service bring about mobile office.Simple Guestbook: I just want to share simple code, may be will be helpful for newbies.SimplyWeather2.gadget: A neat little weather gadget for your Windows Desktop.Tanks: Required summary is hereUtility Project: Utilities ProjectWindows Phone: The goal of this project is to improve my skills in Windows Phonewords: ?????????Wpf Testing Lib: This is a project for auto testing wpf appswtstudy: wtstudy

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive undertaking, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • WSDL-world vs CLR-world – some differences

    - by nmarun
    A change in mindset is required when switching between a typical CLR application and a web service application. There are some things in a CLR environment that just don’t add up in a WSDL arena (and vice-versa). I’m listing some of them here. When I say WSDL-world, I’m mostly talking with respect to a WCF Service and / or a Web Service. No (direct) Method Overloading: You definitely can have overloaded methods in a, say, Console application, but when it comes to a WCF / Web Services application, you need to adorn these overloaded methods with a special attribute so the service knows which specific method to invoke. When you’re working with WCF, use the Name property of the OperationContract attribute to provide unique names. 1: [OperationContract(Name = "AddInt")] 2: int Add(int arg1, int arg2); 3:  4: [OperationContract(Name = "AddDouble")] 5: double Add(double arg1, double arg2); By default, the proxy generates the code for this as: 1: [System.ServiceModel.OperationContractAttribute( 2: Action="http://tempuri.org/ILearnWcfService/AddInt", 3: ReplyAction="http://tempuri.org/ILearnWcfService/AddIntResponse")] 4: int AddInt(int arg1, int arg2); 5: 6: [System.ServiceModel.OperationContractAttribute( 7: Action="http://tempuri.org/ILearnWcfServiceExtend/AddDouble", 8: ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/AddDoubleResponse")] 9: double AddDouble(double arg1, double arg2); With Web Services, though, the story is slightly different. Even after setting the MessageName property of the WebMethod attribute, the proxy does not change the name of the method, but only the underlying soap message changes. 1: [WebMethod] 2: public string HelloGalaxy() 3: { 4: return "Hello Milky Way!"; 5: } 6:  7: [WebMethod(MessageName = "HelloAnyGalaxy")] 8: public string HelloGalaxy(string galaxyName) 9: { 10: return string.Format("Hello {0}!", galaxyName); 11: } The one thing you need to remember is to set the WebServiceBinding accordingly. 1: [WebServiceBinding(ConformsTo = WsiProfiles.None)] The proxy is: 1: [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://tempuri.org/HelloGalaxy", 2: RequestNamespace="http://tempuri.org/", 3: ResponseNamespace="http://tempuri.org/", 4: Use=System.Web.Services.Description.SoapBindingUse.Literal, 5: ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)] 6: public string HelloGalaxy() 7:  8: [System.Web.Services.WebMethodAttribute(MessageName="HelloGalaxy1")] 9: [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://tempuri.org/HelloAnyGalaxy", 10: RequestElementName="HelloAnyGalaxy", 11: RequestNamespace="http://tempuri.org/", 12: ResponseElementName="HelloAnyGalaxyResponse", 13: ResponseNamespace="http://tempuri.org/", 14: Use=System.Web.Services.Description.SoapBindingUse.Literal, 15: ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)] 16: [return: System.Xml.Serialization.XmlElementAttribute("HelloAnyGalaxyResult")] 17: public string HelloGalaxy(string galaxyName) 18:  You see the calling method name is the same in the proxy; however, the soap message that gets generated is different. Using interchangeable data types: See details on this here. Type visibility: In a CLR-based application, if you mark a field as private, well we all know, it’s ‘private’. Coming to the WSDL side of things, in a Web Service, private fields and web methods will not get generated in the proxy. In WCF however, all your operation contracts will be public as they get implemented from an interface.
Even if your ServiceContract interface is declared internal/private, you will see it as a public interface in the proxy. This is because type visibility is a CLR concept and has no bearing on WCF. Also, if a private field has the [DataMember] attribute in a data contract, it will get emitted in the proxy class as a public property for the very same reason. 1: [DataContract] 2: public struct Person 3: { 4: [DataMember] 5: private int _x; 6:  7: [DataMember] 8: public int Id { get; set; } 9:  10: [DataMember] 11: public string FirstName { get; set; } 12:  13: [DataMember] 14: public string Header { get; set; } 15: } 16: } See, the ‘_x’ field is a private member with the [DataMember] attribute, but the proxy class shows it as below: 1: [System.Runtime.Serialization.DataMemberAttribute()] 2: public int _x { 3: get { 4: return this._xField; 5: } 6: set { 7: if ((this._xField.Equals(value) != true)) { 8: this._xField = value; 9: this.RaisePropertyChanged("_x"); 10: } 11: } 12: } Passing derived types to web methods / operation contracts: Once again, in a CLR application, I can pass a derived class as a parameter where a base class is expected. I have the following set up for my WCF service. 1: [DataContract] 2: public class Employee 3: { 4: [DataMember(Name = "Id")] 5: public int EmployeeId { get; set; } 6:  7: [DataMember(Name="FirstName")] 8: public string FName { get; set; } 9:  10: [DataMember] 11: public string Header { get; set; } 12: } 13:  14: [DataContract] 15: public class Manager : Employee 16: { 17: [DataMember] 18: private int _x; 19: } 20:  21: // service contract 22: [OperationContract] 23: Manager SaveManager(Employee employee); 24:  25: // in my calling code 26: Manager manager = new Manager {_x = 1, FirstName = "abc"}; 27: manager = LearnWcfServiceClient.SaveManager(manager); The above will throw an exception which, in short, says that a Manager type was found where an Employee type was expected! Hierarchy flattening of interfaces in WCF: See details on this here. In CLR world, you’ll see the entire hierarchy as is. That’s another difference. Using ref parameters: you can use ref for parameters, but the operation contract should not be one-way (it gives an error when you do an update service reference); besides, it is arguably bad programming anyway, and the better option is to create a return object that is composed of everything you need! This one kind of stumped me. Not sure why I tried this, but you can pass parameters prefixed with the ref keyword* (* terms and conditions apply). The main issue is this: how do the changes that were made to a ‘ref’ input parameter get returned from the service and applied to the local variable? Turns out both Web Services and WCF make this tracking happen by passing the input parameter in the response soap. This way, when the deserializer does its magic, it maps all the elements of the response xml, thereby updating our local variable. Here’s what I’m talking about. 1: [WebMethod(MessageName = "HelloAnyGalaxy")] 2: public string HelloGalaxy(ref string galaxyName) 3: { 4: string output = string.Format("Hello {0}", galaxyName); 5: if (galaxyName == "Andromeda") 6: { 7: galaxyName = string.Format("{0} (2.5 million light-years away)", galaxyName); 8: } 9: return output; 10: } This is how the request and response look in soapUI. As I said above, the behavior is quite similar for WCF as well. But the catch comes when you have one-way web methods / operation contracts.
If you have an operation contract whose return type is void, that is marked one-way, and that has ref parameters, then you’ll get an error message when you try to reference such a service. 1: [OperationContract(Name = "Sum", IsOneWay = true)] 2: void Sum(ref double arg1, ref double arg2); 3:  4: public void Sum(ref double arg1, ref double arg2) 5: { 6: arg1 += arg2; 7: } This is what I got when I did an update to my service reference. It makes sense, because a OneWay operation is… one-way – there’s no returning from this operation. You can also have a one-way web method: 1: [SoapDocumentMethod(OneWay = true)] 2: [WebMethod(MessageName = "HelloAnyGalaxy")] 3: public void HelloGalaxy(ref string galaxyName) This will throw an exception message similar to the one above when you try to update your web service reference. In the CLR space, there’s no such concept of a ‘one-way’ street! Yes, there’s void, but you very well can have ref parameters returned through such a method. Just a point here: although the ref/out concept sounds cool, it generally is a code smell. The better approach is to always return an object that is composed of everything you need returned from a method (a quick sketch of that approach follows below). These are some of the differences that we need to bear in mind when dealing with services that are different from our daily ‘CLR’ life.
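    As a footnote to the ref-parameter point above, here is a minimal sketch of the "return a composed object" alternative; the SumResult, ICalculatorService, and CalculatorService names are made up for illustration and are not from the original post:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class SumResult
        {
            // Everything the caller needs comes back in one object,
            // so no ref/out parameters are required on the operation.
            [DataMember] public double Arg1 { get; set; }
            [DataMember] public double Arg2 { get; set; }
            [DataMember] public double Total { get; set; }
        }

        [ServiceContract]
        public interface ICalculatorService
        {
            [OperationContract]
            SumResult Sum(double arg1, double arg2);
        }

        public class CalculatorService : ICalculatorService
        {
            public SumResult Sum(double arg1, double arg2)
            {
                // Echo the inputs and the computed total back in a single response.
                return new SumResult { Arg1 = arg1, Arg2 = arg2, Total = arg1 + arg2 };
            }
        }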

    Read the article

  • 301 redirect from "/index.html" to root if index.html not exist

    - by Andrij Muzychka
    Can I create a 301 redirect from "index.html" to the root directory if the file "index.html" does not exist? For example: the link "http://example.com/index.html" shows a "404 Error" page. I need a 301 redirect to the root directory, "http://example.com/", instead. In .htaccess I added this rule: Options +FollowSymLinks RewriteCond %{THE_REQUEST} ^.*/index.html RewriteRule ^(.*)index.html$ http://example.com/$1 [R=301,L] but it doesn't work. Can you help me solve this problem?
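    A commonly suggested variant (shown here only as a sketch, not a verified fix for this particular site) adds a file-existence check so the redirect fires only when the requested index.html is not actually present on disk, and makes sure the rewrite engine is switched on:

        Options +FollowSymLinks
        RewriteEngine On
        # Only redirect when the requested file does not exist as a real file
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)index\.html$ http://example.com/$1 [R=301,L]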

    Read the article

  • LINQ to Twitter v2.0.8 Released

    - by Joe Mayo
    Today, I released LINQ to Twitter v2.0.8. Besides normal maintenance, this release adds support for the Twitter Geo API and the Suggested Users API. LINQ to Twitter is hosted on CodePlex.com: http://linqtotwitter.codeplex.com/ In addition to the new functionality, I've made much progress on the LINQ to Twitter documentation, primarily in the Making API Calls area: http://linqtotwitter.codeplex.com/wikipage?title=Making%20API%20Calls&referringTitle=Documentation There's also a discussion forum where you can ask and view questions: http://linqtotwitter.codeplex.com/Thread/List.aspx As always, constructive feedback is welcome. Joe

    Read the article

  • svn checkout through proxy doesn't work

    - by Hoghweed
    I'm on Ubuntu 11.04 x64 and I'm trying to do an svn checkout through a proxy. I edited the servers file to set the HTTP proxy information so a connection can be established, but I'm still getting errors and the checkout is not possible. This is the error: svn checkout http://75.101.130.236/svn/mspdd/ svn: OPTIONS of 'http://75.101.130.236/svn/mspdd': Could not authenticate to proxy server: ignored Kerberos challenge, ignored NTLM challenge, GSSAPI authentication error: Unspecified GSS failure. Minor code may provide more information: Credentials cache file '/tmp/krb5cc_1000' not found (http://75.101.130.236) Accessing the URL through a browser works fine, but not from the terminal. Any idea? Thanks a lot.
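    For reference, the proxy settings Subversion reads live in the [global] (or a per-server-group) section of ~/.subversion/servers; a sketch with invented placeholder values looks like this:

        [global]
        http-proxy-host = proxy.example.com
        http-proxy-port = 8080
        http-proxy-username = myuser
        http-proxy-password = mypassword
        http-proxy-exceptions = localhost, 127.0.0.1

    The "ignored Kerberos challenge, ignored NTLM challenge" part of the error suggests the proxy only accepts NTLM or Kerberos authentication; if the svn client cannot negotiate those, routing the checkout through a local authenticating helper proxy such as cntlm is a commonly used workaround.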

    Read the article
