Search Results

Search found 29101 results on 1165 pages for 'thread local storage'.


  • Windows Azure Evolution – Welcome to VS2012

    - by Shaun
    When Microsoft released the first preview of Windows 8 and Visual Studio, many people in the community asked whether the Windows Azure tools were available for it. The answer was "NO". Microsoft said the Windows Azure tools would support only Visual Studio 2010 until 2012 was finally released, at which point they would work there too. But now, along with the newly published Windows Azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with Visual Studio 2012 RC.

    You can retrieve the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also needs some other components. To download the latest Windows Azure SDK from the Web Platform Installer, just go to the Windows Azure website, click Develop, then .NET, and click the blue "install" button. Then you need to select which version of Visual Studio you want to use, Visual Studio 2010 or Visual Studio 2012 RC. After selecting the version you will download an EXE file. This file will lead you to install the Web Platform Installer 4.0 (if you haven't installed it) and the latest Windows Azure SDK. You can see the version name is June 2012, 1.7. Finally the WebPI detects the dependent components you need to download and begins to install. But if you want to challenge yourself you can download the components and install them manually; the standalone installations are listed on this page with instructions on how to install them and the necessary prerequisites.

    Once you have finished the installation you can open Visual Studio 2012 RC which, as usual, needs to be run as administrator. If you click the New Project link from the start page and navigate to the Cloud category, you will find that there is no project template available. Is something wrong? No: if you change the target framework from the default .NET 4.5 to .NET 4, you will see the Azure project template. This is because the Windows Azure instances do not currently support .NET 4.5. After clicking OK you will see the role creation window, which is similar to what you have seen before, but there are some new role templates in this SDK. First, you have an ASP.NET MVC 4 web role available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile and Web API on the cloud. Then there are two new worker role templates, "Cache Worker Role" and "Worker Role with Service Bus Queue". The "Worker Role with Service Bus Queue" is a worker role to which the necessary references for accessing the Windows Azure Service Bus Queue have been added. It also has some basic sample code in the worker role class that reads messages from the queue once started. The "Cache Worker Role" is a worker role with the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching: it allows the role instances to use their memory as an in-memory distributed cache cluster. By using this feature you can devote one or more worker roles to a dedicated cache cluster; alternatively, you can use part of your web roles' and worker roles' memory as the cache cluster as well. Let's just create an ASP.NET MVC 4 Web Role and press F5 to run it under the local emulator. If you have been working with Azure for a while you will know that on a fresh Azure SDK installation the local storage emulator has to be set up before running locally.
    In this version, however, when we start our Azure project, Visual Studio checks whether the storage emulator has been initialized; if not, it runs the initializer automatically. And as you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature: it creates the emulator database and tables in the default local database. You can set the storage emulator to use a standard SQL Server default instance with the command "dsinit /instance:.". The "dsinit" tool is now located at %ProgramFiles%\Microsoft SDKs\Windows Azure\Emulator\devstore.

    After Visual Studio compiled and deployed the package, our website is shown in the browser. This is the MVC 4 Web Role home page on my Windows 8 machine in IE10. Another thing you might notice is that in this version the compute emulator utilizes IIS Express to host the web roles instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code for accessing the storage service; all of this is the same as what you are doing now on SDK 1.6. You can switch back to IIS to run your web role in the local emulator: just open the Windows Azure project property window and, on the Web page, select "Use IIS Web Server". For more information about this please have a look at Nuno's blog post.

    On the role property pages in Visual Studio there are no massive changes; you can configure your role settings such as the endpoints, certificates, local storage, etc. One addition is the Caching tab. Here you can specify whether to enable the caching feature and how much memory you want to use as the cache cluster. I will introduce more details about it in future posts. The publish and package features are also unchanged: you can publish your project to Azure directly through Visual Studio 2012, or create the package and upload it manually. Below is the SDK version of my deployment, 1.7.30602.1703, in the developer portal.

    Summary

    In this post I introduced the new Windows Azure SDK 1.7, especially how it works with the latest Visual Studio 2012 RC. There are no significant changes in the Visual Studio tooling in this version, but there are some small enhancements such as ASP.NET MVC 4, the Cache Worker Role, SQL Server 2012 LocalDB and IIS Express, etc.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami. A typical data warehouse contains Star and/or Snowflake schemas, made up of Dimensions and Facts. The facts store various numerical information, including amounts; for example, Order Amount, Invoice Amount, etc. Given the truly global nature of business nowadays, end users want to view reports in their own currency, or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts at converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.

    Source Systems. OBIA caters to various source systems such as EBS, PSFT, Siebel, JDE, Fusion, etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies classified by rate type, such as Corporate rate, Spot rate, Period rate, etc.; Siebel stores exchange rates by rate types such as Daily. EBS/Fusion store the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor from which the rate must be calculated, whereas the other source systems store the currency exchange rate directly.

    OBIA Design. Consolidating the data from various disparate source systems poses the challenge of conforming the various currencies, rate types, exchange rates, etc., and of designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed mechanisms in the Common Dimension to allow users to report in the currencies they require. OBIA facts store amounts in various currencies: Document Currency: the currency of the actual transaction; for a multinational company, this can be in various currencies. Local Currency: the base currency in which the accounting entries are recorded by the business; this is generally defined in the Ledger of the company. Global Currencies: OBIA provides five global currencies; three are used across all modules, and the last two are for CRM only. A global currency is very useful when creating reports where the data is viewed enterprise-wide; for example, a US-based multinational would want to see the reports in USD, so the company would choose USD as one of its global currencies. OBIA allows users to define up to five global currencies during the initial implementation. The term Currency Preference designates the set of values Document Currency, Local Currency, Global Currency 1, Global Currency 2 and Global Currency 3, which are shared among all modules. There are four more currency preferences specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency as the currency preference, the data is shown in the currency of the Ledger (or Business Unit) in the prompt, so it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section Toggling Currency Preferences in the Dashboard.

    Design Logic. When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
    If the source system only provides the document amount for a transaction, the extract mapping does a lookup to get the local currency code and the local exchange rate, and the load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the global currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a common/conforming dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and rate type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies; the data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. W_GLOBAL_EXCH_RATE_G, on the other hand, stores the currency conversions between the document currency and the five pre-defined global currencies for each day. Based on the requirements, the fact mappings can decide to use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain during implementation. This is especially important for companies deploying and using OBIA with multiple source adapters.

    Some Gotchas to Look For. It is necessary to think through the currencies during the initial implementation. 1) Identify the various types of currencies used by your business. Understand what your Local (or Base) and Document currencies will be, and identify the various global currencies in which your users will want to see reports, based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time. 2) If you have a multi-source system, make sure that the global currencies and global rate types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done.

    Technical Section. This section briefly describes the technical scenarios employed in the OBIA adaptors to extract this data from each source system. In OBIA we have two main tables which store the currency rate information, as explained in the previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the currency conversions present in the source system and captures data for a date range. W_GLOBAL_EXCH_RATE_G stores global currency conversions at a daily level; the challenge here is to store all five global currency exchange rates in a single record for each From Currency. Let's dig further into the source system extraction logic for each of these tables and understand the flow briefly.

    EBS: In EBS, currency data is stored in the GL_DAILY_RATES table. As the name indicates, GL_DAILY_RATES has data at a daily level. In our warehouse, however, we store the data with a date range and insert a new range record only when the exchange rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps we employ in this process. Step 0 (incremental flow only): clean up the data in W_EXCH_RATE_G; delete the records whose Start Date > minimum conversion date, and update the End Date of the existing records.
    Step 1: compress the daily data from the GL_DAILY_RATES table into range records; the incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Step 2: generate the Previous Rate, Previous Date and Next Date for each daily record from the OLTP. Step 3: filter out the records whose conversion rate is the same as the previous rate, or whose conversion date lies within a single-day range. Step 4: mark the records as 'Keep' or 'Filter', and also get the final End Date for the single range record (the unique combination of From Date, To Date, Rate and Conversion Date). Step 5: filter the records marked as 'Filter' in the INFA map. Steps 1-5 load W_EXCH_RATE_GS, while Step 0 updates/deletes W_EXCH_RATE_G directly; the SIL map then inserts/updates the GS data into W_EXCH_RATE_G. Together these steps convert the daily records in GL_DAILY_RATES into range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G, a table where we store data at a daily granular level. We do, however, need to pivot the data, because data present in multiple rows of the source tables needs to be stored in different columns of the same row in the DW; we use GROUP BY and CASE logic to achieve this.

    Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in Step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter: in Fusion we bring in all the exchange rates in the incremental flow as well and do the cleanup, and the SIL then takes care of inserts/updates accordingly.

    PeopleSoft: PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example (note that this is achieved from PS1 onwards only): 1 Jan 2010 – USD to INR – 45; 31 Jan 2010 – USD to INR – 46. PSFT stores records in this fashion, which means that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010, and we need to store the data in this fashion in the DW. Also, PSFT stores the exchange rate as RATE_MULT and RATE_DIV, and we need to compute RATE_MULT/RATE_DIV to get the correct exchange rate. We generate the From Date and To Date while extracting data from the source, and this has certain assumptions: if a record gets updated/inserted in the source, it will be extracted in the incremental flow, and if this updated/inserted record sits between other dates, we also extract the records that precede and succeed it (based on dates). This is required because we need to generate range records, and there are now three records whose ranges have changed. Taking the same example as above, if a new record gets inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we still extract them because their ranges are affected. Similar logic is used for the global exchange rate extraction: we create the range records, get them into a temporary table, then join to the Day Dimension, create individual records, and pivot the data to get the five global exchange rates for each From Currency, Date and Rate Type.

    Siebel: Siebel facts depend heavily on global exchange rates, and almost none of them really use individual exchange rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used for Siebel from the PS1 release onwards. As of January 2002, the Euro triangulation method for converting between currencies belonging to EMU members is no longer needed for present and future currency exchanges.
    However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Because of this, there are multiple extraction flows for SEBL, i.e., EUR to EMU, EUR to non-EMU, EUR to DMC, and so on, and we load W_EXCH_RATE_G with this data through multiple flows. This has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, like PSFT, SEBL does not have From Date and To Date columns in the source tables, so we use extraction logic similar to that explained in the PSFT section.

    What if all five configured global currencies are the same? As mentioned in the previous sections, from PS1 onwards we store global exchange rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with five global exchange rates in five columns and, as mentioned, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all five chosen global currencies are the same. For example, if the user configures all five global currencies as 'USD', the extract logic will not be able to generate a record for From Currency = USD, because not all source systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case: we generate a record with Conversion Rate = 1.

    Reusable Lookups. Before PS1 we had a mapplet for currency conversions; in PS1 we have only reusable lookups, LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups contain another layer of logic so that all the lookup conditions are met when they are used in the various fact mappings. Anyone who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G must use these lookups; a direct join or lookup on the tables might return wrong data.

    Changing Currency Preferences in the Dashboard. In the 7.9.6.x series, all amount metrics in OBIA showed the Global Currency 1 amount, and customers needed to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and published a tech note for customers of other modules on adding toggling between currency preferences to the solution.

    List of Currency Preferences. Starting with the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the XML file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", ..., which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.
    In Static mode, all users see the full list as given in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD so that the list of currency preferences is based on the user's roles; BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

    Adding Currency to an Amount Field. When the user selects one of the items from the currency prompt, all the amounts on that page are shown in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data is shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields are shown in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field), and depending on the setup (see the next paragraph) the user may receive an error. There are two ways to add the currency field to an amount metric: 1) In the form of a currency code, like USD, EUR, ... For this, the user needs to add the field "Apps Common Currency Code" to the report; this field is in every subject area, usually under the table "Currency Tag" or "Currency Code". 2) In the form of a currency symbol ($ for USD, € for EUR, ...). For this, the user needs to format the amount metrics in the report as a currency column by specifying the currency tag column in the Column Properties option of the Column Actions drop-down list; typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric; in the Data Format tab, select Custom as Treat Number As, and enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column], where column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
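    To make that last step concrete, a filled-in format mask following the syntax above could look like the line below. The subject area, table and column names here are hypothetical placeholders, and the quoting of identifiers containing spaces should be verified against your own catalog; substitute the "BI Common Currency Code" column from your own subject area:

        [$:currencyTagColumn="Financials - AP Transactions"."Currency"."BI Common Currency Code"]

    With such a mask in place, the same amount column picks up a $ symbol when the tag column resolves to USD for the chosen currency preference, and a € symbol when it resolves to EUR.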

    Read the article

  • Oracle Open World 2012 preview: mark your calendars

    - by Eric Bezille
    Now less than a month away from Oracle's major event, held as every year in San Francisco in late September and early October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development), to get a foretaste.

    Oracle Strategy and Roadmaps

    Of course, beyond the keynote sessions, which will give you a precise view of the strategy, those of you on site should not miss the deep-dive sessions taking place during the week. Here are a few selected picks: "Accelerate your Business with the Oracle Hardware Advantage", with John Fowler, Monday October 1, 3:15pm-4:15pm; "Why Oracle Softwares Runs Best on Oracle Hardware", with Bradley Carlile, head of benchmarks, Monday October 1, 12:15pm-1:15pm; "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday October 1, 1:45pm-2:45pm; "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, manager in the hardware and software integration teams, Monday October 1, 4:45pm-5:45pm; "Oracle's SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday October 2, 10:15am-11:15am; "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flierl, head of Solaris development, Wednesday October 3, 10:15am-11:15am; "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday October 1, 12:15pm-1:15pm; "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday October 1, 3:15pm-4:15pm; "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday October 1, 10:45am-11:45am.

    Customer Stories and Testimonials

    While Oracle Open World is an opportunity to talk directly with Oracle's development teams, it is also a chance to exchange with customers and experts who have implemented our technologies and to benefit from their experience, for example: "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the business as well as the project and IT benefits, Wednesday October 3, 1:15pm-2:15pm; "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday October 2, 11:45am-12:45pm; "Creating a Maximum Availability Architecture with SPARC SuperCluster", with a testimonial from Carte Wright, Database Engineer at CKI, Wednesday October 3, 11:45am-12:45pm; "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with a testimonial from Stephen Schleiger, Manager Systems Engineering at Navis, Monday October 1, 1:45pm-2:45pm; "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation for a major banking customer, Monday October 1, 4:45pm-5:45pm; "Oracle Exadata Customer
    Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday October 2, 1:15pm-2:15pm; "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with a testimonial from Alan Powers, CTO of CSC, Thursday October 4, 12:45pm-1:45pm; "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, of US Cellular, Tuesday October 2, 10:15am-11:15am; "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global among others, Tuesday October 2, 11:45am-12:45pm; "Data Warehouse and Big Data Customers' View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday October 1, 4:45pm-5:45pm; "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the story from Oracle's internal IT of the transformation and migration of our entire storage infrastructure, Tuesday October 2, 1:15pm-2:15pm.

    Meet the User Groups and the Oracle Development Teams

    If you plan to arrive early enough, you can also talk with the user groups from Sunday on, or with the Oracle development teams every evening, on topics such as: "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday September 30, 2:30pm-3:30pm; "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman of Rittman Mead Consulting Ltd, Sunday September 30, 10:30am-11:30am; "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday September 30, 1:15pm-2:00pm; "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday September 30, 9:00am-10:00am; "Oracle's Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday October 1, 6:15pm-7:00pm.

    Test and Evaluate the Solutions

    And finally, you can even try out the technologies at the Oracle DemoGrounds (1133 Moscone South for the Oracle systems, OS and virtualization part) and in the Hands-on Labs, such as: "Deploying an IaaS Environment with Oracle VM", Tuesday October 2, 10:15am-11:15am; "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday October 2, 11:45am-12:45pm (it is strongly recommended to have completed the previous Hands-on Lab before taking this one).
"x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", le mercredi 3 Octobre, 5:00pm-6:00pm "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", le mercredi 3 Octobre, 1:15pm-2:15pm "Oracle’s Pillar Axiom 600 Storage System: Power and Ease", le lundi 1er Octobre, 12:15pm-1:15pm "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", le lundi 1er Octobre, 1:45pm-2:45pm "Managing Storage in the Cloud", le mardi 2 Octobre, 5:00pm-6:00pm "Learn How to Write MapReduce on Oracle’s Big Data Platform", le lundi 1er Octobre, 12:15pm-1:15pm "Oracle Big Data Analytics and R", le mardi 2 Octobre, 1:15pm-2:15pm "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", le lundi 1er Octobre, 10:45am-11:45am "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", le lundi 1er Octobre, 4:45pm-5:45pm "Virtualizing Your Oracle Solaris 11 Environment", le mardi 2 Octobre, 1:15pm-2:15pm "Large-Scale Installation and Deployment of Oracle Solaris 11", le mercredi 3 Octobre, 3:30pm-4:30pm En conclusion, une semaine très riche en perspective, et qui vous permettra de balayer l'ensemble des sujets au coeur de vos préoccupations, de la stratégie à l'implémentation... Cette semaine doit se préparer, pour tailler votre agenda sur mesure, à travers les plus de 2000 sessions dont je ne vous ai fait qu'un extrait, et dont vous pouvez retrouver l'ensemble en ligne.

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have the following Pig script, which works perfectly in the Grunt shell (it stored the results to HDFS without any issues); however, the last job (the ORDER BY) fails when I run the same script using Java EmbeddedPig. If I replace the ORDER BY job with something else, such as GROUP or FOREACH GENERATE, the whole script succeeds in Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Does anyone have any experience with this? Any help would be appreciated!

    The Pig script:

        REGISTER pig-udf-0.0.1-SNAPSHOT.jar;
        user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float);
        simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score;
        grouped_user_similarity = GROUP simplified_user_similarity BY user_id;
        ordered_user_similarity = FOREACH grouped_user_similarity {
            sorted = ORDER simplified_user_similarity BY sim_score DESC;
            top = LIMIT sorted 10;
            GENERATE group, top;
        };
        top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10);
        all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0);
        grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id;
        influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score;
        ordered_influence_scores = ORDER influence_scores BY influence_score DESC;
        STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage();

    The error log from Pig: 12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job 12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job 12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission. 12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir. 12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- 
/var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml 12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null 12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100 12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720 12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680 12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004 java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210) Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248) at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153) at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112) ... 6 more 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete 12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed! 12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics: HadoopVersion PigVersion UserId StartedAt FinishedAt Features 0.20.2-cdh3u3 0.8.1-cdh3u3 cchuang 2012-04-05 10:00:34 2012-04-05 10:01:04 GROUP_BY,ORDER_BY Some jobs have failed! Stop running all dependent jobs Job Stats (time in seconds): JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MaxReduceTime MinReduceTime AvgReduceTime Alias Feature Outputs job_local_0001 0 0 0 0 0 0 0 0 all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity GROUP_BY job_local_0002 0 0 0 0 0 0 0 0 grouped_influence_scores,influence_scores GROUP_BY,COMBINER job_local_0003 0 0 0 0 0 0 0 0 ordered_influence_scores SAMPLER Failed Jobs: JobId Alias Feature Message Outputs job_local_0004 ordered_influence_scores ORDER_BY Message: Job failed! 
Error - NA /tmp/cc-test-results-1, Input(s): Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000" Output(s): Failed to produce result in "/tmp/cc-test-results-1" Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager spill count : 0 Total bags proactively spilled: 0 Total records proactively spilled: 0 Job DAG: job_local_0001 -> job_local_0002, job_local_0002 -> job_local_0003, job_local_0003 -> job_local_0004, job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs
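    For what it's worth, the WARN line "LocalJobRunner does not support symlinking into current working dir" together with the missing pigsample_* input path suggests the embedded run executed the ORDER BY sampling job under Hadoop's LocalJobRunner, where the distributed-cache symlink the sampler needs is never created. A sketch of forcing embedded Pig onto the real cluster instead is below; the class names are the standard Pig 0.8 Java API, but whether this resolves this particular case is an assumption, and the script file name is a placeholder:

        import org.apache.pig.ExecType;
        import org.apache.pig.PigServer;

        public class RunScript {
            public static void main(String[] args) throws Exception {
                // MAPREDUCE mode picks up the cluster configuration from the
                // classpath/HADOOP_CONF_DIR, so the sampling job that ORDER BY
                // generates runs on the cluster rather than in LocalJobRunner,
                // which cannot create the pigsample_* symlink.
                PigServer pig = new PigServer(ExecType.MAPREDUCE);
                pig.registerScript("influence.pig");   // hypothetical script file name
            }
        }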

    Read the article

  • Please help to clean up my RoR development environment

    - by PeterWong
    I started RoR development a few months ago, being new to Mac... Time flies, and now I have a lot of different Ruby versions, Rails versions and gem versions located everywhere. Then I installed rvm and things got even worse; everything is a mess! So I want to start over: clean everything up and use rvm again. I want to uninstall all gems, all Rails versions, and all Ruby versions except the system default (the very old one that shipped with the Mac). Any other solutions or suggestions are welcome too. Please help! Here is some info that I think will be useful:

    which -a ruby
    /opt/local/bin/ruby
    /opt/local/bin/ruby
    /usr/local/bin/ruby
    /usr/bin/ruby
    /usr/local/bin/ruby

    which -a rails
    /usr/local/bin/rails
    /usr/bin/rails
    /usr/local/bin/rails

    which -a compass  # similar for rspec and many other gems
    /usr/local/bin/compass
    /usr/local/bin/compass

    gem list

    *** LOCAL GEMS ***

    abstract (1.0.0) actionmailer (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) actionpack (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) activemodel (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) activerecord (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) activeresource (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) activesupport (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) addressable (2.2.2) arel (2.0.6, 1.0.1, 1.0.0.rc1) authlogic (2.1.6, 2.1.3) aws-s3 (0.6.2) base32 (0.1.2) block_helpers (0.3.3) bluecloth (2.0.9) bowline (0.9.4) bowline-bundler (0.0.4) bson (1.1.2) builder (2.1.2) bundler (1.0.2, 1.0.0) compass (0.10.6) crack (0.1.7) devise (1.1.3) diff-lcs (1.1.2) differ (0.1.1) dynamic_form (1.1.3) engineyard (1.3.1) engineyard-serverside-adapter (1.3.3) erubis (2.6.6) escape (0.0.4) extlib (0.9.15) facebooker (1.0.75) faker (0.3.1) faraday (0.5.3, 0.5.2) fast_gettext (0.5.10, 0.4.17) fastercsv (1.5.3) fastthread (1.0.7) ffi (0.6.3) formatize (1.0.1) formtastic (1.1.0, 1.0.1) gemcutter (0.5.0) gettext (2.1.0) git (1.2.5) gosu (0.7.25 universal-darwin) haml (3.0.24, 3.0.23, 3.0.22, 3.0.21, 3.0.18) haml-rails (0.3.4) heroku (1.10.13, 1.9.13) highline (1.5.2) hirb (0.3.4, 0.3.3) hpricot (0.8.2) i18n (0.5.0, 0.4.2, 0.4.1, 0.3.7) jeweler (1.4.0) json (1.4.6) json_pure (1.4.3) linkedin (0.1.8) locale (2.0.5) mail (2.2.12, 2.2.11, 2.2.10, 2.2.9, 2.2.7, 2.2.6.1) memcache-client (1.8.5) meta_search (0.9.8, 0.9.7.2, 0.9.7.1, 0.9.6, 0.9.4) mime-types (1.16) mongo (1.1.2) mongoid (2.0.0.beta.20) multi_json (0.0.5) multipart-post (1.0.1) mysql (2.8.1) mysql2 (0.2.6, 0.2.4, 0.2.3) net-ldap (0.1.1) nice-ffi (0.4) nokogiri (1.4.4, 1.4.2) oa-basic (0.1.6) oa-core (0.1.6) oa-enterprise (0.1.6) oa-oauth (0.1.6) oa-openid (0.1.6) oauth (0.4.4, 0.4.3, 0.4.1) oauth-plugin (0.4.0.pre1) oauth2 (0.1.0) omniauth (0.1.6) paperclip (2.3.6, 2.3.4, 2.3.1.1) passenger (2.2.12) polyglot (0.3.1) pyu-ruby-sasl (0.0.3.2) querybuilder (0.9.2, 0.5.9) rack (1.2.1, 1.1.0, 1.0.1) rack-cache (0.5.3) rack-cache-purge (0.0.2, 0.0.1) rack-mount (0.6.13) rack-openid (1.2.0) rack-test (0.5.6, 0.5.4) railroady (0.11.2) rails (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) railties (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) rake (0.8.7) RedCloth (3.0.4) rest-client (1.6.1) roxml (3.1.5) rscribd (1.2.0) rspec (2.3.0, 2.2.0, 2.1.0, 2.0.1) rspec-core (2.3.0, 2.2.1, 2.1.0, 2.0.1) rspec-expectations (2.3.0, 2.2.0, 2.1.0, 2.0.1) rspec-mocks (2.3.0, 2.2.0, 2.1.0, 2.0.1) rspec-rails (2.3.0, 2.2.0, 2.1.0, 2.0.1) ruby-hmac (0.4.0) ruby-mysql (2.9.3) ruby-ole (1.2.10.1) ruby-openid (2.1.8) ruby-openid-apps-discovery (1.2.0) ruby-recaptcha (1.0.2, 1.0.0) ruby-sdl-ffi
    (0.3) ruby-termios (0.9.6) ruby_parser (2.0.5) rubyforge (2.0.4) rubygame (2.6.4) rubygems-update (1.3.7) rubyless (0.7.0, 0.6.0, 0.3.5) rubyntlm (0.1.1) rubyzip2 (2.0.1) scribd_fu (2.0.6) searchlogic (2.4.27, 2.4.23) sequel (3.16.0, 3.15.0, 3.13.0) sexp_processor (3.0.5) shoulda (2.11.3) sinatra (1.0) slim (0.8.0) slim-rails (0.1.2) spreadsheet (0.6.4.1) sqlite3-ruby (1.3.2, 1.3.1) ssl_requirement (0.1.0) subdomain-fu (1.0.0.beta2, 0.5.4) supermodel (0.1.4) syntax (1.0.0) taps (0.3.13, 0.3.11) templater (1.0.0) temple (0.1.6) text-format (1.0.0) text-hyphen (1.0.0) thor (0.14.6, 0.14.4, 0.14.3, 0.14.1, 0.14.0) tilt (1.1) treetop (1.4.9, 1.4.8) tzinfo (0.3.23) uuidtools (2.1.1, 2.0.0) validates_timeliness (3.0.0.beta.4, 2.3.1) warden (0.10.7) will_paginate (3.0.pre2, 2.3.15, 2.3.14) xml-simple (1.0.12) ya2yaml (0.30) yajl-ruby (0.7.8, 0.7.7) yamltest (0.7.0) zena (0.16.9, 0.16.8)

    ======

    I have run sudo rvm implode and sudo rm -rf ~/.rvm, so there is no rvm now.

    gem env
    RubyGems Environment:
      - RUBYGEMS VERSION: 1.3.7
      - RUBY VERSION: 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0]
      - INSTALLATION DIRECTORY: /usr/local/lib/ruby/gems/1.8
      - RUBY EXECUTABLE: /usr/local/bin/ruby
      - EXECUTABLE DIRECTORY: /usr/local/bin
      - RUBYGEMS PLATFORMS:
        - ruby
        - x86-darwin-10
      - GEM PATHS:
         - /usr/local/lib/ruby/gems/1.8
         - /Users/peter/.gem/ruby/1.8
      - GEM CONFIGURATION:
         - :update_sources => true
         - :verbose => true
         - :benchmark => false
         - :backtrace => false
         - :bulk_threshold => 1000
         - :sources => ["http://rubygems.org/", "http://gems.github.com"]
      - REMOTE SOURCES:
         - http://rubygems.org/
         - http://gems.github.com

    ===

    ls -al /usr/local/lib
    total 5704
    drwxr-xr-x 7 root wheel 238 Jun 1 2010 .
    drwxr-xr-x 9 root wheel 306 Dec 15 16:20 ..
    -rw-r--r-- 1 root wheel 1717208 Jun 1 2010 libruby-static.a
    -rwxr-xr-x 1 root wheel 1191880 Jun 1 2010 libruby.1.8.7.dylib
    lrwxrwxrwx 1 root wheel 19 Jun 1 2010 libruby.1.8.dylib -> libruby.1.8.7.dylib
    lrwxrwxrwx 1 root wheel 19 Jun 1 2010 libruby.dylib -> libruby.1.8.7.dylib
    drwxr-xr-x 6 root wheel 204 Jun 1 2010 ruby
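    Not an answer from the original thread, but a sketch of one possible cleanup sequence based on the paths shown above. These commands are destructive, and the assumptions that /opt/local is a MacPorts Ruby and /usr/local is a source-built one should be verified against the `which` output first:

        # remove every gem in the active gem environment (all versions, executables too);
        # the greps skip the "*** LOCAL GEMS ***" header and blank lines
        gem list --no-versions | grep -v '\*\*\*' | grep -v '^$' | xargs -n 1 sudo gem uninstall -a -I -x

        # remove the MacPorts ruby that owns the /opt/local/bin copies
        sudo port uninstall ruby

        # the /usr/local copy has no package manager; its files (see the ls output
        # above) have to be removed by hand before reinstalling rvm
        sudo rm -rf /usr/local/bin/ruby /usr/local/lib/ruby /usr/local/lib/libruby*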

    Read the article

  • Code runs 6 times slower with 2 threads than with 1

    - by Edward Bird
    So I have written some code to experiment with threads and do some testing. The code should create some numbers and then find the mean of those numbers. I think it is just easier to show you what I have so far. I was expecting the code to run about twice as fast with two threads; measuring it with a stopwatch, I think it runs about six times slower!

        void findmean(std::vector<double>*, std::size_t, std::size_t, double*);

        int main(int argn, char** argv)
        {
            // Program entry point
            std::cout << "Generating data..." << std::endl;

            // Create a vector containing many variables
            std::vector<double> data;
            for(uint32_t i = 1; i <= 1024 * 1024 * 128; i ++)
                data.push_back(i);

            // Calculate mean using 1 core
            double mean = 0;
            std::cout << "Calculating mean, 1 Thread..." << std::endl;
            findmean(&data, 0, data.size(), &mean);
            mean /= (double)data.size();

            // Print result
            std::cout << " Mean=" << mean << std::endl;

            // Repeat, using two threads
            std::vector<std::thread> thread;
            std::vector<double> result;
            result.push_back(0.0);
            result.push_back(0.0);
            std::cout << "Calculating mean, 2 Threads..." << std::endl;

            // Run threads
            uint32_t halfsize = data.size() / 2;
            uint32_t A = 0;
            uint32_t B, C, D;

            // Split the data into two blocks
            if(data.size() % 2 == 0)
            {
                B = C = D = halfsize;
            }
            else if(data.size() % 2 == 1)
            {
                B = C = halfsize;
                D = halfsize + 1;
            }

            // Run with two threads
            thread.push_back(std::thread(findmean, &data, A, B, &(result[0])));
            thread.push_back(std::thread(findmean, &data, C, D, &(result[1])));

            // Join threads
            thread[0].join();
            thread[1].join();

            // Calculate result
            mean = result[0] + result[1];
            mean /= (double)data.size();

            // Print result
            std::cout << " Mean=" << mean << std::endl;

            // Return
            return EXIT_SUCCESS;
        }

        void findmean(std::vector<double>* datavec, std::size_t start, std::size_t length, double* result)
        {
            for(uint32_t i = 0; i < length; i ++)
            {
                *result += (*datavec).at(start + i);
            }
        }

    I don't think this code is exactly wonderful, so if you could suggest ways of improving it I would be grateful for that also.
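    One plausible culprit, offered as a hypothesis rather than a measured diagnosis: both threads execute *result += ... on adjacent elements of the same result vector, so every addition writes through the pointer to two doubles sharing one cache line, and that line ping-pongs between the cores (false sharing); the at() bounds check adds further cost. A minimal sketch of a findmean() variant that accumulates into a thread-local sum and touches the shared slot once:

        // Variant of findmean(): accumulate in a local, publish once at the end.
        void findmean(std::vector<double>* datavec, std::size_t start,
                      std::size_t length, double* result)
        {
            double sum = 0.0;                    // lives in this thread's stack/register
            const double* p = datavec->data();   // skip at()'s bounds check in the hot loop
            for (std::size_t i = 0; i < length; ++i)
                sum += p[start + i];
            *result = sum;                       // single write to the shared vector
        }

    std::vector::data() requires C++11, which the code is already using for std::thread.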

    Read the article

  • How to implement wait() to wait for a notifyAll() from the enter button?

    - by Dakota Miller
    Sorry for the confusion: I posted the wrong logcat info, and I have updated the question. I want to click Start to start a thread; then, when Enter is clicked, I want the thread to continue, get the message, handle it in the thread, and output it to the main thread to update the text view. How would I start a thread that waits for Enter to be pressed and gets the bundle for the Handler? Here is my code:

        public class MainActivity extends Activity implements OnClickListener {
            Handler mHandler;
            Button enter;
            Button start;
            TextView display;
            String dateString;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                enter = (Button) findViewById(R.id.enter);
                start = (Button) findViewById(R.id.start);
                display = (TextView) findViewById(R.id.Display);
                enter.setOnClickListener(this);
                start.setOnClickListener(this);
                mHandler = new Handler() { // <============================= This is Line 31
                    public void handleMessage(Message msg) {
                        super.handleMessage(msg);
                        Bundle bundle = msg.getData();
                        String string = bundle.getString("outKey");
                        display.setText(string);
                    }
                };
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                // Inflate the menu; this adds items to the action bar if it is present.
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }

            @Override
            public void onClick(View v) {
                switch (v.getId()) {
                case R.id.enter:
                    Message msgin = Message.obtain();
                    Bundle bundlein = new Bundle();
                    String in = "It Works!";
                    bundlein.putString("inKey", in);
                    msgin.setData(bundlein);
                    notifyAll();
                    break;
                case R.id.start:
                    new myThread().hello.start();
                    break;
                }
            }

            public class myThread extends Thread {
                Thread hello = new Thread() {
                    @Override
                    public void run() {
                        super.run();
                        Looper.prepare();
                        try {
                            wait();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                        Handler Mhandler = new Handler() {
                            @Override
                            public void handleMessage(Message msg) {
                                super.handleMessage(msg);
                                Bundle bundle = msg.getData();
                                dateString = bundle.getString("inKey");
                            }
                        };
                        Looper.loop();
                        Message msg = Message.obtain();
                        Bundle bundle = new Bundle();
                        bundle.putString("outKey", dateString);
                        msg.setData(bundle);
                        mHandler.sendMessage(msg);
                    }
                };
            }
        }

    Here is the logcat info:

        06-27 00:00:24.832: E/AndroidRuntime(18513): FATAL EXCEPTION: Thread-1210
        06-27 00:00:24.832: E/AndroidRuntime(18513): java.lang.IllegalMonitorStateException: object not locked by thread before wait()
        06-27 00:00:24.832: E/AndroidRuntime(18513): at java.lang.Object.wait(Native Method)
        06-27 00:00:24.832: E/AndroidRuntime(18513): at java.lang.Object.wait(Object.java:364)
        06-27 00:00:24.832: E/AndroidRuntime(18513): at com.example.learninghandlers.MainActivity$myThread$1.run(MainActivity.java:77)
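    For reference, the IllegalMonitorStateException in the logcat means wait() was called without owning the monitor of the object being waited on: both wait() and notifyAll() must run inside a synchronized block on the same object. In the code above, notifyAll() in onClick and wait() in run() target two different objects and hold neither monitor. A minimal sketch of the rule (the class, field and method names are made up for illustration; this is not a drop-in fix for the Activity above):

        public class EnterGate {
            private final Object lock = new Object();
            private boolean messageReady = false;      // guarded by lock

            // Called from the worker thread.
            public void awaitEnter() throws InterruptedException {
                synchronized (lock) {                  // must hold the monitor before wait()
                    while (!messageReady) {            // loop guards against spurious wakeups
                        lock.wait();                   // releases the monitor while blocked
                    }
                    messageReady = false;
                }
            }

            // Called from the enter button's onClick on the UI thread.
            public void signalEnter() {
                synchronized (lock) {                  // same monitor, so notifyAll() is legal
                    messageReady = true;
                    lock.notifyAll();
                }
            }
        }

    A Handler/Looper-based design would avoid blocking the worker thread entirely, but the sketch above shows the minimum needed to make wait()/notifyAll() legal.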

    Read the article

  • Node.js, Cygwin and Socket.io walk into a bar... Node.js throws ENOBUFS and everyone dies...

    - by A Wizard Did It
    I'm hoping someone here can help me out; I'm not having much luck figuring this out myself. I'm running node.js version 0.3.1 on Cygwin, and I'm using Connect and Socket.io. I seem to be having some random problems with DNS or something; I haven't quite figured it out. The end result is that the server runs fine, but when a browser attempts to connect to it, the initial HTTP request works, Socket.io connects, and then the server dies (output below). I don't think it has anything to do with the HTTP request, because the server gets a lot of data posted to it, and it was receiving requests and responding right up until the connection that killed it. I've googled around, and the closest thing I've found is DNS being set improperly. It's a network program meant to run only on an internal network, so I've set nameserver x.x.x.x in my /etc/resolv.conf to the internal DNS, and I've also added nameserver 8.8.8.8. I'm not sure what else to check, but would be grateful for any help. In node.exe.stackdump: Exception: STATUS_ACCESS_VIOLATION at eip=610C51B9 eax=00000000 ebx=00000001 ecx=00000000 edx=00000308 esi=00000000 edi=010FCCB0 ebp=010FCAEC esp=010FCAC4 program=\\?\E:\cygwin\usr\local\bin\node.exe, pid 3296, thread unknown (0xBEC) cs=0023 ds=002B es=002B fs=0053 gs=002B ss=002B Stack trace: Frame Function Args 010FCAEC 610C51B9 (00000000, 00000000, 00000000, 00000000) 010FCBFC 610C5B55 (00000000, 00000000, 00000000, 00000000) 010FCCBC 610C693A (FFFFFFFF, FFFFFFFF, 750334F3, FFFFFFFE) 010FCD0C 61027CB2 (00000002, F4B994D5, 010FCE64, 00000002) 010FCD98 76306B59 (00000002, 010FCDD4, 763069A4, 00000002) End of stack trace Node output: node.js:50 throw e; // process.nextTick error, or 'error' event on first tick ^ Error: ENOBUFS, No buffer space available at doConnect (net.js:642:19) at net.js:803:9 at dns.js:166:30 at IOWatcher.callback (dns.js:48:15) EDIT: I'm hitting an LDAP server using http.createClient immediately after a client connects to get information, and that seems to be the source of the ENOBUFS error. I've edited the source to include && errno != ENOBUFS, which now prevents the server from dying; however, now the LDAP request isn't working, and I'm not sure what's causing that. As I mentioned, this is an internal-only application, so I set the DNS servers in /etc/resolv.conf to the DNS servers that are applied to the host machine. Not sure if this is part of the issue? EDIT 2: Here's some output from gdb --args ./node_g --debug ../myscript.js. I'm not sure if this is related to ENOBUFS, however, as it seems to be disconnecting immediately after connecting with Socket.io. [New thread 672.0x100] Error: dll starting at 0x76e30000 not found. Error: dll starting at 0x76250000 not found. Error: dll starting at 0x76e30000 not found. Error: dll starting at 0x76f50000 not found. [New thread 672.0xc90] [New thread 672.0x448] debugger listening on port 5858 [New thread 672.0xbf4] 14 Jan 18:48:57 - socket.io ready - accepting connections [New thread 672.0xed4] [New thread 672.0xd68] [New thread 672.0x1244] [New thread 672.0xf14] 14 Jan 18:49:02 - Initializing client with transport "websocket" assertion "b[1] == 0" failed: file "../src/node.cc", line 933, function: ssize_t node::DecodeWrite(char*, size_t, v8::Handle<v8::Value>, node::encoding)
0x7724f861 in ntdll!RtlUpdateClonedSRWLock () from /cygdrive/c/Windows/system32/ntdll.dll (gdb) backtrace #0 0x7724f861 in ntdll!RtlUpdateClonedSRWLock () from /cygdrive/c/Windows/system32/ntdll.dll #1 0x7724f861 in ntdll!RtlUpdateClonedSRWLock () from /cygdrive/c/Windows/system32/ntdll.dll #2 0x75030816 in WaitForSingleObjectEx () from /cygdrive/c/Windows/syswow64/KernelBase.dll #3 0x0000035c in ?? () #4 0x00000000 in ?? () (gdb)
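    Regarding the original ENOBUFS crash (the EDIT 2 assertion failure looks like a separate issue): both failing frames pass through dns.js during doConnect, so each http.createClient call is paying a resolver round trip. One cheap experiment, sketched below with placeholder host and port, is to point the client at an IP literal so the Cygwin resolver is taken out of the loop; whether that cures the buffer exhaustion is an assumption to verify, not a known fix:

        // hypothetical: connect to the internal LDAP gateway by IP literal,
        // so no DNS lookup happens on each connection (host/port are placeholders)
        var http = require('http');
        var client = http.createClient(80, '10.0.0.5');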

    Read the article

  • Synchronous communication using NSOperationQueue

    - by chip_munks
    I am new to Objective-C programming. I have created two threads called add and display using NSInvocationOperation and added them onto an NSOperationQueue. I make the display thread run first and then run the add thread. The display thread, after printing "Welcome to display", has to wait for the result to print from the add method, so I have used the waitUntilFinished method. Both operations are on the same queue. If I use waitUntilFinished for operations on the same queue, there may be a situation where a deadlock happens (from Apple's developer documentation). Is that so? To wait for a particular time interval there is a method called waitUntilDate:. But what if I need to wait like this: wait(min(100, dmax)), with dmax = 20? How do I wait for these conditions? It would be much more helpful if anyone could explain with an example.

    EDITED:

        threadss.h
        ----------
        #import <Foundation/Foundation.h>

        @interface threadss : NSObject
        {
            BOOL m_bRunThread;
            int a, b, c;
            NSOperationQueue* queue;
            NSInvocationOperation* operation;
            NSInvocationOperation* operation1;
            NSConditionLock* theConditionLock;
        }
        -(void)Thread;
        -(void)add;
        -(void)display;
        @end

        threadss.m
        ----------
        #import "threadss.h"

        @implementation threadss
        -(id)init {
            if (self = [super init]) {
                queue = [[NSOperationQueue alloc]init];
                operation = [[NSInvocationOperation alloc]initWithTarget:self selector:@selector(display) object:nil];
                operation1 = [[NSInvocationOperation alloc]initWithTarget:self selector:@selector(add) object:nil];
                theConditionLock = [[NSConditionLock alloc]init];
            }
            return self;
        }
        -(void)Thread {
            m_bRunThread = YES;
            //[operation addDependency:operation1];
            if (m_bRunThread) {
                [queue addOperation:operation];
            }
            //[operation addDependency:operation1];
            [queue addOperation:operation1];
            //[self performSelectorOnMainThread:@selector(display) withObject:nil waitUntilDone:YES];
            //NSLog(@"I'm going to do the asynchronous communication btwn the threads!!");
            //[self add];
            //[operation addDependency:self];
            sleep(1);
            [queue release];
            [operation release];
            //[operation1 release];
        }
        -(void)add {
            NSLog(@"Going to add a and b!!");
            a = 1;
            b = 2;
            c = a + b;
            NSLog(@"Finished adding!!");
        }
        -(void)display {
            NSLog(@"Into the display method");
            [operation1 waitUntilFinished];
            NSLog(@"The Result is:%d", c);
        }
        @end

        main.m
        ------
        #import <Foundation/Foundation.h>
        #import "threadss.h"

        int main (int argc, const char * argv[]) {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
            threadss* thread = [[threadss alloc]init];
            [thread Thread];
            [pool drain];
            return 0;
        }

    This is what I have tried with a sample program. Output:

        2011-06-03 19:40:47.898 threads_NSOperationQueue[3812:1503] Going to add a and b!!
        2011-06-03 19:40:47.898 threads_NSOperationQueue[3812:1303] Into the display method
        2011-06-03 19:40:47.902 threads_NSOperationQueue[3812:1503] Finished adding!!
        2011-06-03 19:40:47.904 threads_NSOperationQueue[3812:1303] The Result is:3

    Is the way of invoking the thread correct? 1. Will there be any deadlock condition? 2. How do I wait with wait(min(100, dmax)) where dmax = 50?
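    One way to get a bounded wait like wait(min(100, dmax)) is NSConditionLock, which the code above already allocates but never uses. A minimal sketch follows; the RESULT_READY condition value and the idea of signalling from add are illustrative assumptions, not part of the original code:

        #define RESULT_READY 1

        // In display: wait for the result, but give up after min(100, dmax) seconds.
        NSTimeInterval dmax = 20.0;
        NSDate *deadline = [NSDate dateWithTimeIntervalSinceNow:MIN(100.0, dmax)];
        if ([theConditionLock lockWhenCondition:RESULT_READY beforeDate:deadline]) {
            NSLog(@"The Result is:%d", c);   // add has finished and signalled
            [theConditionLock unlock];
        } else {
            NSLog(@"Timed out waiting for the result");
        }

        // In add: publish the result and flip the condition to RESULT_READY.
        [theConditionLock lock];
        c = a + b;
        [theConditionLock unlockWithCondition:RESULT_READY];

    Since [[NSConditionLock alloc] init] starts at condition 0, the display side blocks until add unlocks with RESULT_READY or the deadline passes, which gives exactly the "wait at most N seconds" behaviour asked about.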

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named "Wind (zh-CN)", which was created by a friend of mine, Jeff Zhao (zh-CN). Now I would like to use a separate post to introduce this module, since I feel it brings a new async programming style to not only Node.js but the whole JavaScript world. If you know or have heard about the new feature in C# 5.0 called "async and await", or if you have learnt F#, you will find that Wind brings a similar async programming experience to JavaScript. By using Wind, we can write async code that looks like sync code. The callbacks, async states and exceptions will be handled by Wind automatically and transparently.

What's the Problem: Dense "Callback" Phobia

Let's first go back to my second post in this series. As I mentioned in that post, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all IO operations are designed around the async callback pattern, which means that when the operation is done, it invokes a function which was passed as the last parameter. For example, the database connection opening code would be like this.

1: sql.open(connectionString, function(error, conn) {
2:     if(error) {
3:         // some error handling code
4:     }
5:     else {
6:         // connection opened successfully
7:     }
8: });

And then if we need to query the database the code would be like this, nested inside the previous callback.

1: sql.open(connectionString, function(error, conn) {
2:     if(error) {
3:         // some error handling code
4:     }
5:     else {
6:         // connection opened successfully
7:         conn.queryRaw(command, function(error, results) {
8:             if(error) {
9:                 // failed to execute this command
10:             }
11:             else {
12:                 // records retrieved successfully
13:             }
14:         };
15:     }
16: });

Assuming we need to copy some data from this database to another, we then need to open another connection and execute the command within the function under the query function.

1: sql.open(connectionString, function(error, conn) {
2:     if(error) {
3:         // some error handling code
4:     }
5:     else {
6:         // connection opened successfully
7:         conn.queryRaw(command, function(error, results) {
8:             if(error) {
9:                 // failed to execute this command
10:             }
11:             else {
12:                 // records retrieved successfully
13:                 target.open(targetConnectionString, function(error, t_conn) {
14:                     if(error) {
15:                         // connect failed
16:                     }
17:                     else {
18:                         t_conn.queryRaw(copy_command, function(error, results) {
19:                             if(error) {
20:                                 // copy failed
21:                             }
22:                             else {
23:                                 // and then, what do you want to do now...
24:                             }
25:                         };
26:                     }
27:                 };
28:             }
29:         };
30:     }
31: });

This is just an example. In a real project the logic would be more complicated. This means our application might become messed up, with the business process fragmented across many callback functions. I would like to call this "Dense Callback Phobia". The challenge is how to make the code straightforward and easy to read, something like below.
1: try
2: {
3:     // open source connection
4:     var s_conn = sqlConnect(s_connectionString);
5:     // retrieve data
6:     var results = sqlExecuteCommand(s_conn, s_command);
7:
8:     // open target connection
9:     var t_conn = sqlConnect(t_connectionString);
10:     // prepare the copy command
11:     var t_command = getCopyCommand(results);
12:     // execute the copy command
13:     sqlExecuteCommand(s_conn, t_command);
14: }
15: catch (ex)
16: {
17:     // error handling
18: }

What's the Problem: Sync-styled Async Programming

Similar to the previous problem, the callback-styled async programming model makes the upcoming operation part of the current operation and mixes it with the error handling code, so it's very hard to understand what on earth the code will do. And since Node.js uses non-blocking IO, we cannot simply invoke those operations one by one, as they would be executed concurrently. For example, in this post, when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just inserted the data into table storage one by one and then printed the "Finished" message, I would see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and the print operation execute synchronously I introduced a module named "async" and the code was changed as below.

1: async.forEach(results.rows,
2:     function (row, callback) {
3:         var resource = {
4:             "PartitionKey": row[1],
5:             "RowKey": row[0],
6:             "Value": row[2]
7:         };
8:         client.insertEntity(tableName, resource, function (error) {
9:             if (error) {
10:                 callback(error);
11:             }
12:             else {
13:                 console.log("entity inserted.");
14:                 callback(null);
15:             }
16:         });
17:     },
18:     function (error) {
19:         if (error) {
20:             error["target"] = "insertEntity";
21:             res.send(500, error);
22:         }
23:         else {
24:             console.log("all done.");
25:             res.send(200, "Done!");
26:         }
27:     });

It ensured that the "Finished" message would be printed only after all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. So here is another challenge: can we make the code look sync-styled, like this?

1: try
2: {
3:     forEach(row in rows) {
4:         var entity = { /* ... */ };
5:         tableClient.insert(tableName, entity);
6:     }
7:
8:     console.log("Finished");
9: }
10: catch (ex) {
11:     console.log(ex);
12: }

How "Wind" Helps

"Wind" is a JavaScript library which provides control flow in plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It's available in NPM, so we can install it through "npm install wind". Now let's create a very simple Node.js application as an example. This application will take some website URLs from the command-line arguments, try to retrieve the body length of each, print them in the console, and print "Finished" at the end. I'm going to use the "request" module to make the HTTP calls simple, so I also need to install it with the command "npm install request". The code would be like this.
1: var request = require("request");
2:
3: // get the urls from arguments; the first two arguments are `node.exe` and `fetch.js`
4: var args = process.argv.splice(2);
5:
6: // main function
7: var main = function() {
8:     for(var i = 0; i < args.length; i++) {
9:         // get the url
10:         var url = args[i];
11:         // send the http request and try to get the response and body
12:         request(url, function(error, response, body) {
13:             if(!error && response.statusCode == 200) {
14:                 // log the url and the body length
15:                 console.log(
16:                     "%s: %d.",
17:                     response.request.uri.href,
18:                     body.length);
19:             }
20:             else {
21:                 // log error
22:                 console.log(error);
23:             }
24:         });
25:     }
26:
27:     // finished
28:     console.log("Finished");
29: };
30:
31: // execute the main function
32: main();

Let's execute this application. (I put the arguments on multiple lines for better reading.)

1: node fetch.js
2:     "http://www.igt.com/us-en.aspx"
3:     "http://www.igt.com/us-en/games.aspx"
4:     "http://www.igt.com/us-en/cabinets.aspx"
5:     "http://www.igt.com/us-en/systems.aspx"
6:     "http://www.igt.com/us-en/interactive.aspx"
7:     "http://www.igt.com/us-en/social-gaming.aspx"
8:     "http://www.igt.com/support.aspx"

Below is the output. As you can see, the finish message was printed at the beginning, and the pages' lengths were retrieved in a different order than we specified. This is because in this code the request commands and console logging commands are executed asynchronously and concurrently. Now let's introduce Wind to make them execute in order, which means it will request the websites one by one and print the message at the end.

First of all we need to import the Wind package and make sure there's only one global variable named "Wind" — and ensure it's "Wind" instead of "wind".

1: var Wind = require("wind");

Next, we need to tell Wind which code will be executed asynchronously so that Wind can control the execution process. In this case the "request" operation executes asynchronously, so we will create a "Task" for it by using a built-in helper function in Wind named Wind.Async.Task.create.

1: var requestBodyLengthAsync = function(url) {
2:     return Wind.Async.Task.create(function(t) {
3:         request(url, function(error, response, body) {
4:             if(error || response.statusCode != 200) {
5:                 t.complete("failure", error);
6:             }
7:             else {
8:                 var data =
9:                 {
10:                     uri: response.request.uri.href,
11:                     length: body.length
12:                 };
13:                 t.complete("success", data);
14:             }
15:         });
16:     });
17: };

The code above created a "Task" from the original request call. In Wind, a "Task" means an operation that will finish at some time in the future. A task can be started by invoking its start() method, but no one knows when it will actually finish. The Wind.Async.Task.create helper creates a task for us. Its only parameter is a function where we put the actual operation and then notify the task object that it finished successfully or failed, using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task to "success" with the URL and body length. If it failed I set the task to "failure" and passed the error out.
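The same pattern works for any callback-based operation. As a quick sanity check, here is a minimal sketch that wraps setTimeout as a Wind task, using only the Wind.Async.Task.create and complete() calls shown above (the sleepAsync name is my own, not part of Wind):

1: // a minimal sketch: wrapping setTimeout as a Wind task
2: var sleepAsync = function (ms) {
3:     return Wind.Async.Task.create(function (t) {
4:         setTimeout(function () {
5:             // no meaningful result to return, so pass null
6:             t.complete("success", null);
7:         }, ms);
8:     });
9: };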
Next, we will change the main() function. In Wind, if we want a function to be controlled by Wind we need to mark it as "async". This is done with the code below.

1: var main = eval(Wind.compile("async", function() {
2: }));

When the application is running, Wind will detect the "eval(Wind.compile("async", function" pattern and generate anonymous code from the body of the original function. The application will then run the anonymous code instead of the original one. In our example the main function will be like this.

1: var main = eval(Wind.compile("async", function() {
2:     for(var i = 0; i < args.length; i++) {
3:         try
4:         {
5:             var result = $await(requestBodyLengthAsync(args[i]));
6:             console.log(
7:                 "%s: %d.",
8:                 result.uri,
9:                 result.length);
10:         }
11:         catch (ex) {
12:             console.log(ex);
13:         }
14:     }
15:
16:     console.log("Finished");
17: }));

As you can see, when I try to request a URL I use a new command named "$await". It tells Wind that the operation next to $await will be executed asynchronously, and the flow should be paused until it finishes (or fails). So in this case, my application will pause until the first response is received, then print its body length, then try the next one. At the end, it prints the finish message.

Finally, execute the main function. The full code would be like this.

1: var request = require("request");
2: var Wind = require("wind");
3:
4: var args = process.argv.splice(2);
5:
6: var requestBodyLengthAsync = function(url) {
7:     return Wind.Async.Task.create(function(t) {
8:         request(url, function(error, response, body) {
9:             if(error || response.statusCode != 200) {
10:                 t.complete("failure", error);
11:             }
12:             else {
13:                 var data =
14:                 {
15:                     uri: response.request.uri.href,
16:                     length: body.length
17:                 };
18:                 t.complete("success", data);
19:             }
20:         });
21:     });
22: };
23:
24: var main = eval(Wind.compile("async", function() {
25:     for(var i = 0; i < args.length; i++) {
26:         try
27:         {
28:             var result = $await(requestBodyLengthAsync(args[i]));
29:             console.log(
30:                 "%s: %d.",
31:                 result.uri,
32:                 result.length);
33:         }
34:         catch (ex) {
35:             console.log(ex);
36:         }
37:     }
38:
39:     console.log("Finished");
40: }));
41:
42: main().start();

Run our new application. At the beginning we will see the compiled code generated by Wind. Then we can see the pages requested one by one, and at the end the finish message is printed. Below is the code Wind generated for us, with the original code shown alongside the output code.
1: // Original:
2: function () {
3:     for(var i = 0; i < args.length; i++) {
4:         try
5:         {
6:             var result = $await(requestBodyLengthAsync(args[i]));
7:             console.log(
8:                 "%s: %d.",
9:                 result.uri,
10:                 result.length);
11:         }
12:         catch (ex) {
13:             console.log(ex);
14:         }
15:     }
16:
17:     console.log("Finished");
18: }
19:
20: // Compiled:
21: /* async << function () { */ (function () {
22:     var _builder_$0 = Wind.builders["async"];
23:     return _builder_$0.Start(this,
24:         _builder_$0.Combine(
25:             _builder_$0.Delay(function () {
26:                 /* var i = 0; */ var i = 0;
27:                 /* for ( */ return _builder_$0.For(function () {
28:                     /* ; i < args.length */ return i < args.length;
29:                 }, function () {
30:                     /* ; i ++) { */ i ++;
31:                 },
32:                 /* try { */ _builder_$0.Try(
33:                     _builder_$0.Delay(function () {
34:                         /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) {
35:                             /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length);
36:                             return _builder_$0.Normal();
37:                         });
38:                     }),
39:                     /* } catch (ex) { */ function (ex) {
40:                         /* console.log(ex); */ console.log(ex);
41:                         return _builder_$0.Normal();
42:                     /* } */ },
43:                     null
44:                 )
45:                 /* } */ );
46:             }),
47:             _builder_$0.Delay(function () {
48:                 /* console.log("Finished"); */ console.log("Finished");
49:                 return _builder_$0.Normal();
50:             })
51:         )
52:     );
53: /* } */ })

How Wind Works

Someone may raise a big concern when they see I used "eval" in my code, assuming that Wind uses "eval" to execute code dynamically, since "eval" performs very poorly. But I would say, Wind does NOT use "eval" to run the code. It only uses "eval" as a flag to know which code should be compiled at runtime. When the code is first executed, Wind checks for "eval(Wind.compile("async", function", so it knows this function should be compiled. It then uses parse-js to analyze the inner JavaScript and generates the anonymous code in memory. It then rewrites the original code so that when the application runs, it uses the anonymous version instead of the original one. Since the code generation is done once, when the application starts, it doesn't matter how long our application runs or how many times the async function is invoked — the generated code is reused, with no need to generate it again. So there's no significant performance hit when using Wind.
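To make the "compile once, reuse forever" idea concrete, here is a toy sketch of that caching behavior. This is not Wind's actual source, just an illustration of the concept, assuming the expensive rewrite happens only on first use:

1: // a toy illustration of compile-once semantics (not Wind's real code)
2: var compiledCache = {};
3:
4: var compileOnce = function (key, source, rewrite) {
5:     if (!compiledCache[key]) {
6:         // the expensive parse/rewrite runs only the first time
7:         compiledCache[key] = rewrite(source);
8:     }
9:     // every later call reuses the generated function
10:     return compiledCache[key];
11: };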
Wind in My Previous Demo

Let's adopt Wind in one of my previous demonstrations to see how it helps us make our code simple, straightforward, and easy to read and understand. In this post, when I implemented the functionality that copied records from my WASD database to table storage, the logic was like this:

1. Open the database connection.
2. Execute a query to select all records from the table.
3. Recreate the table in Windows Azure Table Storage.
4. Create an entity from each of the records retrieved previously, and insert them into table storage.
5. Finally, show a message as the HTTP response.

But as the image below shows, since there are so many callbacks and async operations, it's very hard to understand the logic from the code. Now let's use Wind to rewrite it. First of all, of course, we need the Wind package. Then we need to include the package files in the project and mark them as "Copy always". Add the Wind package to the source code. Pay attention to the variable name: you must use "Wind" instead of "wind".

1: var express = require("express");
2: var async = require("async");
3: var sql = require("node-sqlserver");
4: var azure = require("azure");
5: var Wind = require("wind");

Now we need to create some async functions with Wind. All the async operations — open database, retrieve records, recreate table (delete and create) and insert entity — should be wrapped so that they can be controlled by Wind. Below are these new functions. All of them are created by using Wind.Async.Task.create.

1: sql.openAsync = function (connectionString) {
2:     return Wind.Async.Task.create(function (t) {
3:         sql.open(connectionString, function (error, conn) {
4:             if (error) {
5:                 t.complete("failure", error);
6:             }
7:             else {
8:                 t.complete("success", conn);
9:             }
10:         });
11:     });
12: };
13:
14: sql.queryAsync = function (conn, query) {
15:     return Wind.Async.Task.create(function (t) {
16:         conn.queryRaw(query, function (error, results) {
17:             if (error) {
18:                 t.complete("failure", error);
19:             }
20:             else {
21:                 t.complete("success", results);
22:             }
23:         });
24:     });
25: };
26:
27: azure.recreateTableAsync = function (tableName) {
28:     return Wind.Async.Task.create(function (t) {
29:         client.deleteTable(tableName, function (error, successful, response) {
30:             console.log("delete table finished");
31:             client.createTableIfNotExists(tableName, function (error, successful, response) {
32:                 console.log("create table finished");
33:                 if (error) {
34:                     t.complete("failure", error);
35:                 }
36:                 else {
37:                     t.complete("success", null);
38:                 }
39:             });
40:         });
41:     });
42: };
43:
44: azure.insertEntityAsync = function (tableName, entity) {
45:     return Wind.Async.Task.create(function (t) {
46:         client.insertEntity(tableName, entity, function (error, entity, response) {
47:             if (error) {
48:                 t.complete("failure", error);
49:             }
50:             else {
51:                 t.complete("success", null);
52:             }
53:         });
54:     });
55: };

Then, in order to use these functions, we will create a new function which contains all the steps for the data copying.

1: var copyRecords = eval(Wind.compile("async", function (req, res) {
2:     try {
3:     }
4:     catch (ex) {
5:         console.log(ex);
6:         res.send(500, "Internal error.");
7:     }
8: }));

Let's execute the steps one by one with the "$await" keyword introduced by Wind, so that they are invoked in sequence. First, open the database connection.

1: var copyRecords = eval(Wind.compile("async", function (req, res) {
2:     try {
3:         // connect to the windows azure sql database
4:         var conn = $await(sql.openAsync(connectionString));
5:         console.log("connection opened");
6:     }
7:     catch (ex) {
8:         console.log(ex);
9:         res.send(500, "Internal error.");
10:     }
11: }));

Then retrieve all records through the database connection.

1: var copyRecords = eval(Wind.compile("async", function (req, res) {
2:     try {
3:         // connect to the windows azure sql database
4:         var conn = $await(sql.openAsync(connectionString));
5:         console.log("connection opened");
6:         // retrieve all records from database
7:         var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
8:         console.log("records selected. count = %d", results.rows.length);
9:     }
10:     catch (ex) {
11:         console.log(ex);
12:         res.send(500, "Internal error.");
13:     }
14: }));

After recreating the table, we need to create the entities and insert them into table storage.
1: var copyRecords = eval(Wind.compile("async", function (req, res) {
2:     try {
3:         // connect to the windows azure sql database
4:         var conn = $await(sql.openAsync(connectionString));
5:         console.log("connection opened");
6:         // retrieve all records from database
7:         var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
8:         console.log("records selected. count = %d", results.rows.length);
9:         if (results.rows.length > 0) {
10:             // recreate the table
11:             $await(azure.recreateTableAsync(tableName));
12:             console.log("table created");
13:             // insert records in table storage one by one
14:             for (var i = 0; i < results.rows.length; i++) {
15:                 var entity = {
16:                     "PartitionKey": results.rows[i][1],
17:                     "RowKey": results.rows[i][0],
18:                     "Value": results.rows[i][2]
19:                 };
20:                 $await(azure.insertEntityAsync(tableName, entity));
21:                 console.log("entity inserted");
22:             }
23:         }
24:     }
25:     catch (ex) {
26:         console.log(ex);
27:         res.send(500, "Internal error.");
28:     }
29: }));

Finally, send the response back to the browser.

1: var copyRecords = eval(Wind.compile("async", function (req, res) {
2:     try {
3:         // connect to the windows azure sql database
4:         var conn = $await(sql.openAsync(connectionString));
5:         console.log("connection opened");
6:         // retrieve all records from database
7:         var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
8:         console.log("records selected. count = %d", results.rows.length);
9:         if (results.rows.length > 0) {
10:             // recreate the table
11:             $await(azure.recreateTableAsync(tableName));
12:             console.log("table created");
13:             // insert records in table storage one by one
14:             for (var i = 0; i < results.rows.length; i++) {
15:                 var entity = {
16:                     "PartitionKey": results.rows[i][1],
17:                     "RowKey": results.rows[i][0],
18:                     "Value": results.rows[i][2]
19:                 };
20:                 $await(azure.insertEntityAsync(tableName, entity));
21:                 console.log("entity inserted");
22:             }
23:             // send response
24:             console.log("all done");
25:             res.send(200, "All done!");
26:         }
27:     }
28:     catch (ex) {
29:         console.log(ex);
30:         res.send(500, "Internal error.");
31:     }
32: }));

If we compare this with the previous code, we will find it has become more readable and much easier to understand. It's very easy to tell what this function does, even without any comments. When a user goes to the URL "/was/copyRecords" we execute the function above. The code would be like this.

1: app.get("/was/copyRecords", function (req, res) {
2:     copyRecords(req, res).start();
3: });

And below are the logs printed in the local compute emulator console. As we can see, the functions executed one by one, and then finally the response came back to my browser.

Scaffold Functions in Wind

Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I'm going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped the original async functions, such as open database, create table, etc. All of them are very similar: create a task using Wind.Async.Task.create and return the error or result object through the task's complete() function. In fact, Wind provides some functions for us to create task objects from the original async functions directly. If the original async function only has a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object. For example, the code below returns a task object which wraps the file existence check function.
1: var Wind = require("wind");
2: var fs = require("fs");
3:
4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists);

In Node.js a very popular async function pattern is that the first parameter of the callback function represents the error object, and the other parameters are the return values. In this case we can use another built-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created with the code below.

1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open);
2:
3: /*
4: sql.openAsync = function (connectionString) {
5:     return Wind.Async.Task.create(function (t) {
6:         sql.open(connectionString, function (error, conn) {
7:             if (error) {
8:                 t.complete("failure", error);
9:             }
10:             else {
11:                 t.complete("success", conn);
12:             }
13:         });
14:     });
15: };
16: */

When I was testing the scaffold functions under Wind.Async.Binding I found that some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapper methods manually.

Another scaffold method in Wind is parallel task coordination. In this example, the steps of opening the database, retrieving the records and recreating the table should be invoked one by one, but copying the data from database to table storage can be executed in parallel. For that, Wind provides a scaffold function named Task.whenAll. Task.whenAll accepts a list of tasks and creates a new task which returns only when all tasks have completed, or when any error occurs. For example, in the code below I used Task.whenAll to make all the copy operations execute at the same time.

1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) {
2:     try {
3:         // connect to the windows azure sql database
4:         var conn = $await(sql.openAsync(connectionString));
5:         console.log("connection opened");
6:         // retrieve all records from database
7:         var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
8:         console.log("records selected. count = %d", results.rows.length);
9:         if (results.rows.length > 0) {
10:             // recreate the table
11:             $await(azure.recreateTableAsync(tableName));
12:             console.log("table created");
13:             // insert records in table storage in parallel
14:             var tasks = new Array(results.rows.length);
15:             for (var i = 0; i < results.rows.length; i++) {
16:                 var entity = {
17:                     "PartitionKey": results.rows[i][1],
18:                     "RowKey": results.rows[i][0],
19:                     "Value": results.rows[i][2]
20:                 };
21:                 tasks[i] = azure.insertEntityAsync(tableName, entity);
22:             }
23:             $await(Wind.Async.Task.whenAll(tasks));
24:             // send response
25:             console.log("all done");
26:             res.send(200, "All done!");
27:         }
28:     }
29:     catch (ex) {
30:         console.log(ex);
31:         res.send(500, "Internal error.");
32:     }
33: }));
34:
35: app.get("/was/copyRecordsInParallel", function (req, res) {
36:     copyRecordsInParallel(req, res).start();
37: });

Besides task creation and coordination, Wind supports cancellation, so that we can send a cancellation signal to running tasks. It also includes an exception story, which means any exceptions will be reported to the caller function.
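The cancellation API isn't shown in this post, so the following is only a hedged sketch of what it might look like, assuming Wind keeps the CancellationToken and sleep helpers from its Jscex predecessor. Treat the exact names as assumptions and check the Wind documentation before relying on them.

1: // a hedged sketch of cancellation; the CancellationToken/sleep names
2: // are assumed from Wind's Jscex heritage, not confirmed by this post
3: var ct = new Wind.Async.CancellationToken();
4:
5: var pollAsync = eval(Wind.compile("async", function () {
6:     try {
7:         while (true) {
8:             console.log("polling...");
9:             $await(Wind.Async.sleep(1000, ct)); // throws once cancelled
10:         }
11:     }
12:     catch (ex) {
13:         if (ct.isCancellationRequested) {
14:             console.log("polling cancelled.");
15:         }
16:     }
17: }));
18:
19: pollAsync().start();
20: // send the cancellation signal after five seconds
21: setTimeout(function () { ct.cancel(); }, 5000);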
Summary

In this post I introduced a Node.js module named Wind, created by my friend Jeff Zhao. As you can see, unlike other async libraries and frameworks, Wind adopts ideas from F# and C# and uses runtime code generation to make it easier to write async, callback-based functions in a sync-styled way. By using Wind there will be almost no callbacks, and the code will be very easy to understand. Wind is still being developed and improved. There might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear your problems, feedback, suggestions and comments. You can contact Jeff by
- Email: [email protected]
- Group: https://groups.google.com/d/forum/windjs
- GitHub: https://github.com/JeffreyZhao/wind/issues

Source code can be downloaded here.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

- Storage: Import/Export Hard Disk Drives to your Storage Accounts
- HDInsight: General Availability of our Hadoop Service in the cloud
- Virtual Machines: New VM Gallery, ACL support for VIPs
- Web Sites: WebSocket and Remote Debugging Support
- Notification Hubs: Segmented customer push notification support with tag expressions
- TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
- Developer Analytics: New Relic support for Web Sites + Mobile Services
- Service Bus: Support for partitioned queues and topics
- Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

Storage: Import/Export Hard Disk Drives to Windows Azure

I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

Encrypted Transport

Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

How to Import/Export your first Hard Drive of Data

You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab make sure to sign up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

HDInsight: 100% Compatible Hadoop Service in the Cloud

Last week we announced the general availability release of Windows Azure HDInsight.
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools.

The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

Virtual Machines: VM Gallery Enhancements

Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

- Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
- Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle supplied images.
- MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
- Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
- Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
Virtual Machines: ACL Support for VIPs

A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network you could alternatively limit traffic from public IPs that can access your workloads.

Here are the default behaviors for ACLs in Windows Azure:

- By default (i.e. no rules specified), all traffic is permitted.
- When using only Permit rules, all other traffic is denied.
- When using only Deny rules, all other traffic is permitted.
- When there is a combination of Permit and Deny rules, all other traffic is denied.

Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

Web Sites: Web Sockets Support

With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.
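As a quick illustration (my own minimal sketch, not from the announcement), a socket.io-based Node.js app needs nothing Azure-specific once the Web Sockets toggle is on; the standard socket.io 0.9-style bootstrap below is all it takes, and the 'news' event name is just an example:

var app = require('express')();
var server = require('http').createServer(app);
var io = require('socket.io').listen(server);

// push a message to each client as it connects
io.sockets.on('connection', function (socket) {
    socket.emit('news', { hello: 'world' });
    socket.on('ack', function (data) {
        console.log(data);
    });
});

// Windows Azure Web Sites provides the port via process.env.PORT
server.listen(process.env.PORT || 3000);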
Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure. The debugger will then stop the web site's execution when it hits any break points that you have set within your web application's project inside Visual Studio. For example, below I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio. Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

New support for using tag expressions to enable advanced customer segmentation

With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example:

- Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
- Push content to multiple segments in a single message—e.g.
  rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
- Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

var notification = new WindowsNotification(messagePayload);
hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

- Social: To "all my group except me" - group:id && !user:id
- Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
- Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
- Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.
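For Node.js back-ends, roughly the same send should be possible through the azure npm module. The sketch below assumes the notification hubs support in that SDK (createNotificationHubService and the wns helper); treat the exact method shape as an assumption and verify it against the SDK documentation:

var azure = require('azure');

// hub name and connection string come from the Windows Azure portal
var hub = azure.createNotificationHubService('myhub', connectionString);

// send a WNS toast to the same tag expression used above
// (sendToastText01 is assumed from the SDK's WNS helper surface)
hub.wns.sendToastText01(
    '(Follows:RedSox || Follows:Cardinals) && Loc:Boston',
    { text1: 'Game day special for Boston fans!' },
    function (error) {
        if (!error) { console.log('notification sent.'); }
    });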
TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services. I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. Below is a screen-shot of the Git repository hosted within Team Foundation Services. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don't have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will create a dialog like below. I gave the web site a name and then made sure the "Publish from source control" checkbox was selected. When we click next we'll be prompted for the location of the source repository. We'll select "Team Foundation Services". Once we do this we'll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie"). When we click the "Authorize Now" button we'll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source code to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today's Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it. Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything). Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic. Select "New Relic" within the dialog above, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version" – which does not cost anything and can be used forever. Once we've signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu. This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there are both 32-bit and 64-bit editions of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's "Configuration" tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We'll simply re-publish the web site again to Windows Azure and New Relic will automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. When we do this a new browser tab will launch with the New Relic admin tool loaded within it. We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region)
- Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
- Error information including call stack details for exceptions that have occurred at runtime
- SQL Server profiling information – including which queries executed against your database and what their performance was
- And a whole bunch more…

The cool thing about New Relic is that you don't need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven't tried New Relic out yet with Windows Azure I recommend you do so – I think you'll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today's release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning allows you to achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.
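Queues can also be created programmatically. The hedged Node.js sketch below uses the azure npm module's createQueueIfNotExists and assumes the EnablePartitioning option is passed through to the underlying queue description; that option name is my assumption based on the Service Bus REST API, so verify it before relying on it:

var azure = require('azure');

// reads the service bus connection info from environment variables by default
var serviceBus = azure.createServiceBusService();

// EnablePartitioning is assumed to map to the queue description's
// EnablePartitioning element in the Service Bus REST API
serviceBus.createQueueIfNotExists('orders', { EnablePartitioning: true },
    function (error) {
        if (!error) { console.log('partitioned queue ready.'); }
    });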
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • I am having a problem with a ClassCastException. Can anyone please help me out?

    - by Piyush
    This is my code:

        package com.example.userpage;

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.Button;
        import android.widget.EditText;
        import android.widget.TextView;

        public class UserPage extends Activity {
            String tv, tv1;
            EditText name, pass;
            TextView x, y;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                Button button = (Button) findViewById(R.id.widget44);
                button.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        name.setText(" ");
                        pass.setText(" ");
                    }
                });

                x = (TextView) findViewById(R.id.widget46);
                y = (TextView) findViewById(R.id.widget47);
                name = (EditText) findViewById(R.id.widget41);
                pass = (EditText) findViewById(R.id.widget42);

                Button button1 = (Button) findViewById(R.id.widget45);
                button1.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        tv = name.getText().toString();
                        tv1 = pass.getText().toString();
                        x.setText(tv);
                        y.setText(tv1);
                    }
                });
            }
        }

    And this is the relevant part of my log cat:

        02-16 12:25:30.308: WARN/dalvikvm(1010): threadid=1: thread exiting with uncaught exception (group=0x4001d800)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010): FATAL EXCEPTION: main
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.userpage/com.example.userpage.UserPage}: java.lang.ClassCastException: android.widget.TextView
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at android.app.ActivityThread.access$2300(ActivityThread.java:125)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at android.os.Handler.dispatchMessage(Handler.java:99)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at android.os.Looper.loop(Looper.java:123)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at android.app.ActivityThread.main(ActivityThread.java:4627)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at java.lang.reflect.Method.invokeNative(Native Method)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at java.lang.reflect.Method.invoke(Method.java:521)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at dalvik.system.NativeStart.main(Native Method)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010): Caused by: java.lang.ClassCastException: android.widget.TextView
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at com.example.userpage.UserPage.onCreate(UserPage.java:35)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627)
        02-16 12:25:30.388: ERROR/AndroidRuntime(1010):     ... 11 more
        02-16 12:25:30.438: WARN/ActivityManager(67): Force finishing activity com.example.userpage/.UserPage

    After a small edit and a reinstall, a second launch dies identically, with the ClassCastException reported at UserPage.java:34.
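    For what it's worth, a hedged sketch of where this usually goes wrong, not taken from an accepted answer: the ClassCastException at UserPage.java:35 fires on one of the findViewById casts, which means main.xml declares that id as a different widget type than TextView, or a stale R.java is handing back the wrong ids (Project > Clean regenerates it). Checking the runtime type before casting makes the mismatch visible instead of crashing:

        // Minimal sketch — replaces the bare cast at the failing line.
        // R.id.widget46 must be declared as <TextView> (or a subclass)
        // in main.xml for the original cast to succeed.
        View v = findViewById(R.id.widget46);
        if (v instanceof TextView) {
            x = (TextView) v;
        } else {
            // Hypothetical diagnostic: report what widget46 really is
            throw new IllegalStateException("R.id.widget46 is a "
                    + v.getClass().getSimpleName()
                    + "; declare it as <TextView> in main.xml"
                    + " or clean the project to regenerate R.java");
        }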

    Read the article

  • GNOME Do not launching

    - by PyRulez
    When I try running GNOME Do, I get this:

        chris@Chris-Ubuntu-Laptop:~$ gnome-do
        pgrep: invalid user name: -u and it is not writable

    Trying sudo:

        chris@Chris-Ubuntu-Laptop:~$ sudo gnome-do
        [NetworkService] Could not initialize Network Manager dbus: Unable to open the session message bus.
        [Error 17:54:30.122] [SystemService] Could not initialize dbus: Unable to open the session message bus.
        (Do:2401): Wnck-CRITICAL **: wnck_set_client_type got called multiple times.
        (Do:2401): libdo-WARNING **: Binding '<Super>space' failed!
        [Error 17:54:30.649] [AbstractKeyBindingService] Key "" is already mapped.

    Then three item sources fail, each with a stack trace:

        Tomboy.NotesItemSource "Tomboy Notes" encountered an error in UpdateItems: System.TypeInitializationException: An exception was thrown by the type initializer for Tomboy.TomboyDBus ---> System.Exception: Unable to open the session message bus. ---> System.ArgumentNullException: Argument cannot be null.
        Parameter name: address
          at NDesk.DBus.Bus.Open (System.String address) [0x00000] in <filename unknown>:0
          at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0
          at Tomboy.TomboyDBus..cctor () [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at Tomboy.NotesItemSource.UpdateItems () [0x00000] in <filename unknown>:0
          at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0

        Firefox.PlacesItemSource "Firefox Places" encountered an error in UpdateItems: System.InvalidCastException: Cannot cast from source type to destination type.
          at Mono.Data.Sqlite.SqliteDataReader.VerifyType (Int32 i, DbType typ) [0x00000] in <filename unknown>:0
          at Mono.Data.Sqlite.SqliteDataReader.GetString (Int32 i) [0x00000] in <filename unknown>:0
          at Firefox.PlacesItemSource+<LoadPlaceItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0
          at System.Collections.Generic.List`1[Firefox.PlaceItem].AddEnumerable (IEnumerable`1 enumerable) [0x00000] in <filename unknown>:0
          at System.Collections.Generic.List`1[Firefox.PlaceItem]..ctor (IEnumerable`1 collection) [0x00000] in <filename unknown>:0
          at System.Linq.Enumerable.ToArray[PlaceItem] (IEnumerable`1 source) [0x00000] in <filename unknown>:0
          at Firefox.PlacesItemSource.UpdateItems () [0x00000] in <filename unknown>:0
          at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0

        Do.Universe.Linux.GNOMESpecialLocationsItemSource "GNOME Special Locations" encountered an error in UpdateItems: System.IO.FileNotFoundException: Could not find file "/root/.gtk-bookmarks".
        File name: '/root/.gtk-bookmarks'
          at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, Boolean anonymous, FileOptions options) [0x00000] in <filename unknown>:0
          at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share) [0x00000] in <filename unknown>:0
          at (wrapper remoting-invoke-with-check) System.IO.FileStream:.ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare)
          at System.IO.File.OpenRead (System.String path) [0x00000] in <filename unknown>:0
          at System.IO.StreamReader..ctor (System.String path, System.Text.Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize) [0x00000] in <filename unknown>:0
          at System.IO.StreamReader..ctor (System.String path) [0x00000] in <filename unknown>:0
          at (wrapper remoting-invoke-with-check) System.IO.StreamReader:.ctor (string)
          at Do.Universe.Linux.GNOMESpecialLocationsItemSource+<ReadBookmarkItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0
          at Do.Universe.Linux.GNOMESpecialLocationsItemSource.UpdateItems () [0x00000] in <filename unknown>:0
          at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0

    After that it prints a full thread dump (the main thread, an inotify watcher and the "Universe Update Dispatcher" thread, all blocked in waits), and then the same three UpdateItems errors a second time, identically. It stops when I try my key combination, ctrl-alt-. It does not pop up though.
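    A speculative reading of the output, not an accepted answer: running under sudo drops the user's DBUS_SESSION_BUS_ADDRESS, which is exactly why every plugin fails with "Unable to open the session message bus" and why it looks for /root/.gtk-bookmarks instead of the user's file; GNOME Do has to run inside the user's desktop session. A sketch of the usual recovery steps:

        # The initial pgrep error hints at a broken launcher script, so
        # reinstalling the package is a reasonable first move:
        sudo apt-get install --reinstall gnome-do
        killall gnome-do 2>/dev/null   # stop any half-started instance
        gnome-do &                     # relaunch as the logged-in user, never via sudo
        # "Binding '<Super>space' failed!" means another program already
        # owns the summon shortcut; pick a free combination in GNOME Do's
        # Preferences -> Keyboard if the key does nothing.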

    Read the article

  • Let other computers view my localhost over a network

    - by Smickie
    Hi, I have Apache and whatnot running on my local machine (a Mac), and there is another Mac on the local network. How does this other machine access my localhost? For example, I have a local website at example.local.net in my vhost. How can another computer on the network navigate to this site? Cheers!
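    A sketch of one common approach, assuming macOS on both machines and the vhost name from the question; nothing here comes from an accepted answer:

        # 1. On the serving Mac, find its LAN address:
        ipconfig getifaddr en0                 # e.g. 192.168.1.42
        # 2. Make sure Apache listens on all interfaces, not just loopback:
        #    in httpd.conf, use "Listen 80" rather than "Listen 127.0.0.1:80".
        # 3. On the other Mac, map the vhost name to that address:
        echo "192.168.1.42 example.local.net" | sudo tee -a /etc/hosts
        # http://example.local.net on the second Mac now reaches the first.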

    Read the article

  • PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14 + SuExec => Connection reset by peer: mod_fcgid: error reading data from FastCGI server

    - by Zigzag
    I'm trying to use PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14, but I always get the error "Connection reset by peer: mod_fcgid: error reading data from FastCGI server", and Apache returns a 500 error each time I try to execute a PHP page. I compiled Apache with these options:

        ./configure --with-mpm=worker --enable-userdir=shared --enable-actions=shared --enable-alias=shared --enable-auth=shared --enable-so --enable-deflate \
            --enable-cache=shared --enable-disk-cache=shared --enable-info=shared --enable-rewrite=shared \
            --enable-suexec=shared --with-suexec-caller=www-data --with-suexec-userdir=site --with-suexec-logfile=/usr/local/apache2/logs/suexec.log --with-suexec-docroot=/home

    Then PHP:

        ./configure --with-config-file-path=/usr/local/apache2/php --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-zlib --enable-exif --with-gd --enable-cgi

    Then Fcgid:

        APXS=/usr/local/apache2/bin/apxs ./configure.apxs

    The vhost is:

        <Directory /home/website_panel/site/>
            FCGIWrapper /home/website_panel/cgi/php .php
            ...
            ErrorLog /home/website_panel/logs/error.log
        </Directory>

    cat /home/website_panel/logs/error.log:

        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:42 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:42 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:43 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:43 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php

    The suexec log:

        root:/usr/local/apache2# cat /var/log/apache2/suexec.log
        [2010-03-07 22:11:05]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:15]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:23]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:42]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:43]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php

        root:/usr/local/apache2# cat logs/error_log
        [Sun Mar 07 22:18:47 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/local/apache2/bin/suexec)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Memory Allocated 0 bytes (each conf takes 32 bytes)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Version 0.7 - Initialized [0 Confs]
        [Sun Mar 07 22:18:47 2010] [notice] Apache/2.2.14 (Unix) mod_fcgid/2.3.5 configured -- resuming normal operations

        root:/usr/local/apache2# /home/website_panel/cgi/php -v
        PHP 5.3.2 (cli) (built: Mar 7 2010 16:01:49)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    If someone has an idea, I want to hear it ^^ Thanks!
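    One detail in the question stands out: /home/website_panel/cgi/php -v reports "PHP 5.3.2 (cli)", i.e. the CLI binary. mod_fcgid needs the CGI/FastCGI SAPI (php-cgi, which --enable-cgi builds); the CLI SAPI cannot speak the FastCGI protocol and exits immediately, which produces exactly "Premature end of script headers". A sketch of a typical wrapper, assuming PHP was installed under the default prefix:

        #!/bin/sh
        # Hypothetical /home/website_panel/cgi/php wrapper — exec the
        # FastCGI-capable SAPI, not the CLI binary. The php-cgi path is an
        # assumption (default prefix /usr/local).
        PHP_FCGI_CHILDREN=4            # child processes per wrapper
        PHP_FCGI_MAX_REQUESTS=1000     # recycle children periodically
        export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
        exec /usr/local/bin/php-cgi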

    Read the article

  • Permissions problems with Apache / SVN

    - by Fred Wuerges
    I installed an SVN server (v1.6) on a contracted VPS running CentOS 5 and Apache 2.2 with the WHM panel. I installed and configured all the necessary modules and am able to create and access repositories via my web browser normally. The problem: I cannot commit or import anything; it always returns permission errors. First error:

        Can not open file '/var/www/svn/test/db/txn-current-lock': Permission denied

    After fixing the previous error:

        Can't open '/var/www/svn/test/db/tempfile.tmp': Permission denied

    And another (many others follow):

        Can't open file '/var/www/svn/test/db/txn-protorevs/0-1m.rev': Permission denied

    I've read numerous tutorials regarding these errors and applied the permissions they suggest, all without success. I've set the owner to apache or nobody and tried different permissions for folders and files. I'm using TortoiseSVN to connect to the server. Some information you may find useful: I'm trying to commit through an external HTTP connection, like:

        svn commit http://example.com/svn/test

    SELinux is disabled; sestatus returns:

        SELinux status: disabled

    Running the command to see the active Apache processes, some processes are left with user/group "nobody". I tried changing Apache's settings so it would not run with that user/group, but all my websites stopped working, returning this error:

        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    Apache process list:

        root@vps [/var/www]# ps aux | egrep '(apache|httpd)'
        root     19904  0.0  4.4 133972 35056 ?   Ss  16:58  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   20401  0.0  3.5 133972 27772 ?   S   17:01  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        root     20409  0.0  3.4 133972 27112 ?   S   17:01  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   20410  0.0  3.8 190040 30412 ?   Sl  17:01  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   20412  0.0  3.9 190344 30944 ?   Sl  17:01  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   20414  0.0  4.4 190160 35364 ?   Sl  17:01  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   20416  0.0  4.0 190980 32108 ?   Sl  17:01  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   20418  0.3  5.3 263028 42328 ?   Sl  17:01  0:12 /usr/local/apache/bin/httpd -k start -DSSL
        root     32409  0.0  0.1   7212   816 pts/0 R+ 17:54 0:00 egrep (apache|httpd)

    SVN folder permissions (/var/www/):

        drwxrwxr-x 3 apache apache 4096 Dec 11 16:41 svn/

    Repository permissions (/var/www/svn/):

        drwxrwxr-x 6 apache apache 4096 Dec 11 16:41 test/

    Internal folders of the repository (/var/www/svn/test):

        drwxrwxr-x 2 apache apache 4096 Dec 11 16:41 conf/
        drwxrwxr-x 6 apache apache 4096 Dec 11 16:41 db/
        -rwxrwxr-x 1 apache apache    2 Dec 11 16:41 format*
        drwxrwxr-x 2 apache apache 4096 Dec 11 16:41 hooks/
        drwxrwxr-x 2 apache apache 4096 Dec 11 16:41 locks/
        -rwxrwxr-x 1 apache apache  229 Dec 11 16:41 README.txt*
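    A speculative fix, inferred only from the listings above: the Apache children handling requests run as "nobody" (see the ps output), but every file under /var/www/svn/test is owned by "apache", so mod_dav_svn cannot create db/txn-current-lock or its temp files. Aligning the two is usually enough:

        # Give the repository to the user Apache actually runs as:
        chown -R nobody:nobody /var/www/svn/test
        # ...or instead change Apache's runtime user to match the owner;
        # check which user is configured before touching anything:
        grep -E '^(User|Group)' /usr/local/apache/conf/httpd.conf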

    Read the article

  • DHCP and DNS on non-AD 2003 Server: PTR is updating but no A records

    - by user29819
    I have a strange issue. I have a DHCP and DNS server running in a non-AD environment on Windows Server 2003. I set up DHCP to update DNS A and PTR records even if the client doesn't request it, but I only see PTR records updated; the A records are not created at all. The domain is "local", the forward zone is called "local", and option 015 is set to "local" (the actual name). The PTR records are created with the right name (example: win64_ent.local). What am I missing here?
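    When only PTR records appear, the usual culprit is the forward zone itself refusing dynamic updates: a standard primary zone is created with dynamic updates set to None, while the reverse zone has been switched on. A speculative checklist using the stock 2003 tools (the zone name comes from the question):

        rem 1. Allow dynamic updates on the forward zone "local" — in the DNS
        rem    console: zone Properties -> Dynamic updates -> Nonsecure and secure,
        rem    or from the command line:
        dnscmd /Config local /AllowUpdate 1
        rem 2. In the DHCP console, under scope (or server) Properties -> DNS,
        rem    tick "Always dynamically update DNS A and PTR records" and the
        rem    option for clients that do not request updates.
        rem 3. Force an immediate retry from a client:
        ipconfig /registerdns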

    Read the article

  • Apache Error Upgrading to PHP 5.5

    - by user195385
    I am trying to upgrade PHP and received this error at the command line:

        httpd: Syntax error on line 493 of /private/etc/apache2/httpd.conf: Syntax error on line 8 of /private/etc/apache2/other/+php-osx.conf: Cannot load /usr/local/php5/libphp5.so into server: dlopen(/usr/local/php5/libphp5.so, 10): Symbol not found: _libiconv
          Referenced from: /usr/local/php5/lib/libintl.8.dylib
          Expected in: /usr/lib/libiconv.2.dylib
         in /usr/local/php5/lib/libintl.8.dylib

    I was upgrading via http://php-osx.liip.ch/ using the command:

        curl -s http://php-osx.liip.ch/install.sh | bash -s 5.5

    Any help would be appreciated!
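    A speculative way to narrow this down, not a confirmed fix: the dlopen error says libintl.8.dylib wants a libiconv that exports _libiconv, but the loader resolved the stock system copy at /usr/lib/libiconv.2.dylib, which does not export it. That mix is typical of a half-completed upgrade of the /usr/local/php5 tree:

        # Inspect what the bundled libraries actually link against
        # (otool is the stock macOS tool; paths come from the error):
        otool -L /usr/local/php5/libphp5.so
        otool -L /usr/local/php5/lib/libintl.8.dylib
        # Re-running the installer for the target branch usually replaces
        # the whole tree in one go:
        curl -s http://php-osx.liip.ch/install.sh | bash -s 5.5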

    Read the article

  • Apache subdomain problem

    - by Rudiger
    Sorry if this is answered somewhere else, but I can't figure it out. I can't get my server to respond on the subdomain, only on the main domain. The relevant info is below; if you need more, let me know.

        Listen 10.0.1.191:80
        ServerName server.local:80

    (I know, a bit stupid, but it's logical to me and it works.)

        ServerName www.server.local
        ServerAlias server.local
        DocumentRoot /var/www/html/

        ServerName qtp.server.local
        DocumentRoot /var/www/qtp/

    Cheers
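    On Apache 2.2, two name-based hosts on one address need a NameVirtualHost line, and each ServerName/DocumentRoot pair has to live inside its own <VirtualHost> block; neither appears in the snippet as pasted. A sketch of the usual layout, reusing the names and address from the question:

        Listen 10.0.1.191:80
        NameVirtualHost 10.0.1.191:80

        <VirtualHost 10.0.1.191:80>
            ServerName   www.server.local
            ServerAlias  server.local
            DocumentRoot /var/www/html/
        </VirtualHost>

        <VirtualHost 10.0.1.191:80>
            ServerName   qtp.server.local
            DocumentRoot /var/www/qtp/
        </VirtualHost>

    The client also has to resolve qtp.server.local to 10.0.1.191 (hosts file or local DNS); otherwise Apache never receives a request with that Host header.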

    Read the article

  • How can I make an alias expand to a list of recipients returned by a command?

    - by Frerich Raabe
    I have a rarely used /etc/aliases entry:

        vmailusers: :include:/usr/local/etc/vmailusers

    The /usr/local/etc/vmailusers file is generated by a cron job executing:

        ls /home/vmail | grep -v lists > /usr/local/etc/vmailusers
        chmod 0640 /usr/local/etc/vmailusers
        chown mailnull:mail /usr/local/etc/vmailusers

    Is there a way to avoid having to run a cron job and instead execute the ls command at the very moment the vmailusers alias is used?
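    Most MTAs cannot run a command to expand an alias at lookup time, but sendmail-style aliases can deliver to a program, and that program can compute the recipient list when the mail actually arrives. A sketch under that assumption (the script name is made up, and if sendmail uses smrsh the script must live in the smrsh program directory):

        #!/bin/sh
        # Hypothetical /usr/local/bin/vmail-fanout, wired up in /etc/aliases as:
        #   vmailusers: "|/usr/local/bin/vmail-fanout"
        # Builds the recipient list at delivery time — no cron job needed.
        recipients=$(ls /home/vmail | grep -v lists)
        # Resubmit the message (read from stdin) to the computed recipients;
        # -oi keeps a lone "." line in the body from ending the message early.
        exec /usr/sbin/sendmail -oi $recipients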

    Read the article

  • Network share permission issues

    - by JL
    I have an IIS server running a site whose app pool runs under Local System; this is done because it's easier to have full permissions to certificates and other file-based resources on the local server. The problem is that when I try to write or copy a file to a network share, permissions are obviously not in place on the remote system for the IIS server's Local System account. Is it possible to grant the IIS server's Local System account read/write or even full access on the remote system?
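    A hedged note on how this usually works: a service running as Local System authenticates on the network as the computer account (DOMAIN\MACHINE$), so in a domain you can grant that account rights on the remote share and folder; in a workgroup there is no such account and the connection arrives as anonymous. A sketch assuming a domain, with made-up server and share names:

        :: Grant the web server's computer account (note the trailing $)
        :: modify rights on the remote folder and on the share itself:
        icacls D:\Shares\Uploads /grant "CONTOSO\WEBSRV01$:(OI)(CI)M"
        net share Uploads=D:\Shares\Uploads /grant:CONTOSO\WEBSRV01$,CHANGE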

    Read the article

  • TFS Build Automation - Web Deployment Project error

    - by gracejz
    I'm trying to build a web deployment project using the TFS automated build process. When I build the project directly in Visual Studio 2008, it works fine, but from TFS I get the following error:

        "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\BuildType\TFSBuild.proj" (EndToEndIteration target) (1) ->
        "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\BuildType\TFSBuild.proj" (CoreCompile target) (1:2) ->
        "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\BuildType\TFSBuild.proj" (CompileConfiguration target) (1:3) ->
        "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\BuildType\TFSBuild.proj" (CompileSolution target) (1:4) ->
        "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\Sources\TestSolution.sln" (default target) (6) ->
        "C:\Users\tfsservice\AppData\Local\Temp\TestProduct\TestSolution\Sources\WebDeployment\WebDeployment.wdproj" (default target) (48) ->
        (CreateVirtualDirectory target) ->
        C:\Program Files\MSBuild\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets(676,5): error : Some or all identity references could not be translated.

    I made sure that the NETWORK SERVICE account has permission to access all the web folders. Any ideas?
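    The failing step is the CreateVirtualDirectory target in Microsoft.WebDeployment.targets, which sets IIS ACLs and therefore has to translate account names the build agent may not be able to resolve. One speculative workaround, relying only on standard MSBuild semantics (the last definition of a target wins), is an empty override added near the bottom of the .wdproj, after the Microsoft.WebDeployment.targets import; note this skips virtual-directory creation for local builds of that configuration too, and the alternative is simply to make the offending identity resolvable on the build machine:

        <!-- Hypothetical addition to WebDeployment.wdproj, placed after the
             <Import> of Microsoft.WebDeployment.targets: the empty target
             replaces the imported one, so the IIS vdir (and the failing
             ACL translation) is never attempted on the build agent. -->
        <Target Name="CreateVirtualDirectory" />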

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server architecture is a very deep subject, and covering it in a single post is an almost impossible task. However, it is a very popular topic among beginners and advanced users alike. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about the beginnings of SQL Server architecture. As stated earlier, this is a very deep subject, and in this first article of the series he covers the basic terminology. In future articles he will explore the subject further.

    Anil Kumar Yadav is a Trainer, SQL Domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc.

    In this article we will discuss the MS SQL Server architecture. The major components of SQL Server are:

    - Relational Engine
    - Storage Engine
    - SQL OS

    Now we will discuss and understand each one of them.

    1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. The different tasks of the Relational Engine are:

    - Query Processing
    - Memory Management
    - Thread and Task Management
    - Buffer Management
    - Distributed Query Processing

    2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on the storage system (disk, SAN etc.). When we talk about any database in SQL Server, there are two types of files created at the disk level: the data file and the log file. The data file physically stores the data in data pages; log files, also known as write-ahead logs, store the transactions performed on the database. Let's understand the data file and the log file in more detail:

    Data File: The data file stores data in the form of data pages (8 KB each), and these data pages are logically organized into extents.

    Extents: Extents are logical units in the database, a combination of 8 data pages; i.e. 64 KB forms an extent. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages, like index, system and object data. Uniform extents, on the other hand, are dedicated to only one type.

    Pages: These are some of the page types that can be stored in SQL Server:

    - Data Page: Holds the data entered by the user, except data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml.
    - Index: Stores the index entries.
    - Text/Image: Stores LOB (Large Object) data such as text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data.
    - GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): Used for saving information related to the allocation of extents.
    - PFS (Page Free Space): Information related to page allocation and the unused space available on pages.
    - IAM (Index Allocation Map): Information pertaining to the extents that are used by a table or index, per allocation unit.
    - BCM (Bulk Changed Map): Keeps information about the extents changed in a bulk operation.
    - DCM (Differential Change Map): Information about the extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.

    Log File: Also known as the write-ahead log, it stores the modifications made to the database (DML and DDL). Sufficient information is logged to be able to:

    - Roll back transactions if requested
    - Recover the database in case of failure

    Write-ahead logging is used to create the log entries. Transaction logs are written in chronological order in a circular way, and the truncation policy for the logs is based on the recovery model.

    SQL OS: This lies between the host machine (the Windows OS) and SQL Server. All the activities performed by the database engine are taken care of by SQL OS. It is a highly configurable operating system layer with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management, which deals with the buffer pool, the log buffer, and deadlock detection using the blocking and locking structure. Other services include exception handling and hosting for external components such as the Common Language Runtime (CLR).

    I guess this brief article gives you an idea of the various terminologies related to SQL Server architecture. In future articles we will explore them further.

    Guest Author: The author of this article is Anil Kumar Yadav, Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
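    Since the post leans on the page and extent arithmetic (8 KB pages, 8 pages to an extent), here is a small query against the standard catalog views that makes it concrete. The table name is hypothetical, and the join is the common simplification that covers in-row and row-overflow allocation units:

        -- Pages are 8 KB and 8 pages make one extent, so a table's
        -- footprint can be read straight from sys.allocation_units.
        SELECT  OBJECT_NAME(p.object_id)  AS table_name,
                au.type_desc,             -- IN_ROW_DATA, ROW_OVERFLOW_DATA, ...
                au.total_pages,           -- 8 KB pages
                au.total_pages / 8.0      AS extents,
                au.total_pages * 8       AS size_kb
        FROM    sys.allocation_units AS au
        JOIN    sys.partitions       AS p
                ON au.container_id = p.hobt_id   -- simplification: in-row
                                                 -- and row-overflow units
        WHERE   p.object_id = OBJECT_ID('dbo.SomeTable');  -- hypothetical table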

    Read the article

  • Locate the crashing code from the crash log on my iPhone 4 device

    - by lu yuan
    How can I locate the code that caused the crash from the crash log on my iPhone 4 device? Since crashed thread 0 shows only a series of framework frames plus main.m, I cannot pinpoint the exact code that triggered this crash in order to debug it. Any suggestion? Thanks in advance!

        Incident Identifier: B6BD84B7-CE0A-485D-A877-0FD0F5B75933
        CrashReporter Key:   b0b97a37f2a1e4911ce2ef34e1793e028463bb67
        Hardware Model:      iPhone3,1
        Process:             myApp [11615]
        Path:                /var/mobile/Applications/28AE71F2-36CA-4A87-83D9-07DF2DFE74F1/myApp.app/myApp
        Identifier:          myApp
        Version:             ??? (???)
        Code Type:           ARM (Native)
        Parent Process:      launchd [1]

        Date/Time:           2012-06-09 21:12:22.792 +0800
        OS Version:          iPhone OS 5.1 (9B176)
        Report Version:      104

        Exception Type:      EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes:     KERN_INVALID_ADDRESS at 0x0000000b
        Crashed Thread:      0

        Thread 0 name:  Dispatch queue: com.apple.main-thread
        Thread 0 Crashed:
        0   libobjc.A.dylib    0x36721f78 0x3671e000 + 16248
        1   MapKit             0x34e7ace6 0x34e68000 + 77030
        2   CoreFoundation     0x3525f1f4 0x35247000 + 98804
        3   Foundation         0x311b6740 0x31112000 + 673600
        4   CoreFoundation     0x352d4acc 0x35247000 + 580300
        5   CoreFoundation     0x352d4298 0x35247000 + 578200
        6   CoreFoundation     0x352d303e 0x35247000 + 573502
        7   CoreFoundation     0x3525649e 0x35247000 + 62622
        8   CoreFoundation     0x35256366 0x35247000 + 62310
        9   GraphicsServices   0x36552432 0x3654e000 + 17458
        10  UIKit              0x3234ce76 0x3231b000 + 204406
        11  myApp              0x0001bac0 main (main.m:16)
        12  myApp              0x0001ba80 start + 32
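    For what it's worth, the unsymbolicated frames already narrow things down: frame 0 is libobjc directly under a MapKit frame, the classic signature of an MKMapView messaging a delegate that has already been deallocated. A speculative sketch of the usual guard (the ivar name is invented; manual retain/release as in a 2012 pre-ARC project); to go further, keep the .dSYM that matches the build and let Xcode's Organizer symbolicate the log:

        // Detach the delegate before the owning controller dies, so
        // MapKit stops calling a freed object.
        - (void)dealloc {
            _mapView.delegate = nil;   // hypothetical ivar
            [_mapView release];
            [super dealloc];
        }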

    Read the article

< Previous Page | 221 222 223 224 225 226 227 228 229 230 231 232  | Next Page >