Search Results

Search found 28784 results on 1152 pages for 'start'.

Page 638/1152

  • Changing the BizTalk message output file name

    - by Bill Osuch
    By default, BizTalk creates the filename of a message dropped to a send port as %MessageID%, which is the unique identifier (GUID) of the message. What if you want to create your own filename? To start, create a simple schema and a basic orchestration that will receive the message and send it right back out. If you deploy this and wire up the ports, you can drop an xml file into your receive port and have it come out at your send port named something like {7A63CAF8-317B-49D5-871F-9FD57910C3A0}.xml. Now, we'll create a new message with a custom filename. First, create a new orchestration variable called NewFileName, of type System.String. Next, create a second message using the same schema as the message you're receiving in the Receive shape. Now, drag a Construct Message shape onto the orchestration. In the shape's properties, set Messages Constructed to the new message you just created. Double-click the Message Assignment shape (inside the Construct shape) and paste in the following code:

        Message_2 = Message_1;
        NewFileName = Message_1(FILE.ReceivedFileName);
        NewFileName = NewFileName.Replace(".xml","_");
        NewFileName = NewFileName + "output_" + System.DateTime.Now.Year.ToString() + "-" + System.DateTime.Now.Month.ToString();
        Message_2(FILE.ReceivedFileName) = NewFileName;

    Here we make a copy of the received message, get its original file name (ReceivedFileName), replace its extension with an underscore, and date-stamp it. Finally, add a Send shape and a Port to the surface, and configure them to send the message you just created. Deploy it, and create a new send port. It should be just about identical to the first send port, except this time the file name will be "%SourceFileName%.xml" (without the quotes of course). Fire up the application, drop in a test file, and you should now get both the xml file named with a GUID, and a second file named something along the lines of "MySchemaTestFile_output_2011-6.xml".
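
    One tweak worth considering (my suggestion, not part of the original walkthrough): Month.ToString() yields "6" rather than "06", which is why the sample name ends in "2011-6". If you want zero-padded, sortable names, a single DateTime format string should do it in the same Message Assignment shape:

        NewFileName = NewFileName + "output_" + System.DateTime.Now.ToString("yyyy-MM");   // e.g. "2011-06"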

    Read the article

  • Paper Gold Rush

    - by Chris G. Williams
    The last few days at the shop have been reminiscent of a marathon of Pawn Stars. Quite a few people have come in wanting to trade for store credit. Most of them have left disappointed. We did pick up a few things here and there (which hopefully I can sell). The problem, in a nutshell, is that people get it in their head that a (YuGiOh) card is worth X amount because they looked it up 2-3 years ago, or someone told them it was valuable... then they play it in their deck for a year without sleeves, and cram it in a binder covered in duct tape. By the time they bring the cards in to me, new sets have come out which often devalue the tournament usefulness of the card from $20 to *maybe* 50 cents, in mint condition. Which means I can offer them about 10-15 cents... only they are almost never in mint condition, which means I usually offer them nothing at all. Most of the time, you can watch their smile fade as I start going through their cards. It's kinda sad, really, since I know they think they've spent the last two years walking around with the keys to their own personal gold mine. I don't really enjoy seeing that look on a child's face. I like kids, and I remember those moments when perception and reality crashed headlong into each other. It was seldom pretty. So, when I'm talking to a child, I try to take it easy on them and give them some suggestions on how to better preserve their cards. Sometimes though, it's an adult. Depending on the situation, my response to them varies pretty broadly. Most of the time though, I still feel pretty bad when it doesn't go their way.

    Read the article

  • Getting SQL table row counts via sysindexes vs. sys.indexes

    - by Bill Osuch
    Among the many useful SQL snippets I regularly use is this little bit that will return the row count for every user table:

        SELECT so.name as TableName, MAX(si.rows) as [RowCount]
        FROM sysobjects so
        JOIN sysindexes si ON si.id = OBJECT_ID(so.name)
        WHERE so.xtype = 'U'
        GROUP BY so.name
        ORDER BY [RowCount] DESC

    This is handy to find tables that have grown wildly, zero-row tables that could (possibly) be dropped, or other clues into the data. Right off the bat you may spot some "non-ideal" code - I'm using sysobjects rather than sys.objects. What's the difference? In SQL Server 2005 and later, sysobjects is no longer a table but a "compatibility view", meant for backward compatibility only. SELECT * from each and you'll see the different data that each returns. Microsoft advises that sysindexes could be removed in a future version of SQL Server, but this has never really been an issue for me since my company is still using SQL 2000. However, there are murmurs that we may actually migrate to 2008 some year, so I might as well go ahead and start using an updated version of this snippet on the servers that can handle it:

        SELECT so.name as TableName, ddps.row_count as [RowCount]
        FROM sys.objects so
        JOIN sys.indexes si ON si.OBJECT_ID = so.OBJECT_ID
        JOIN sys.dm_db_partition_stats AS ddps ON si.OBJECT_ID = ddps.OBJECT_ID AND si.index_id = ddps.index_id
        WHERE si.index_id < 2 AND so.is_ms_shipped = 0
        ORDER BY ddps.row_count DESC

    Read the article

  • Connect to localdb using Sql Server management studio

    - by Magnus Karlsson
    I was trying to find my database for LocalDB under localhost etc., but no luck. The following led me to just connect to it; kind of obvious really when you look at your connection string, but... it's Sunday morning or something. From: http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx

    High-Level Overview: After the lengthy introduction it's time to take a look at LocalDB from the technical side. At a very high level, LocalDB has the following key properties:
    - LocalDB uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server. The application uses the same client-side providers (ADO.NET, ODBC, PDO and others) to connect to it, and operates on data using the same T-SQL language as provided by SQL Express.
    - LocalDB is installed once on a machine (per major SQL Server version). Multiple applications can start multiple LocalDB processes, but they are all started from the same sqlservr.exe executable file from the same disk location.
    - LocalDB doesn't create any database services; LocalDB processes are started and stopped automatically when needed. The application just connects to "Data Source=(localdb)\v11.0" and the LocalDB process is started as a child process of the application. A few minutes after the last connection to this process is closed, the process shuts down.
    - LocalDB connections support the AttachDbFileName property, which allows developers to specify a database file location. LocalDB will attach the specified database file and the connection will be made to it.
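
    Putting those last two properties together, here is a minimal C# sketch of connecting to LocalDB (the .mdf path is hypothetical, and the "v11.0" instance name is the SQL Server 2012 default from the article):

        using System;
        using System.Data.SqlClient;

        class LocalDbDemo
        {
            static void Main()
            {
                // The first connection spins up a LocalDB process as a child of this app;
                // it shuts itself down a few minutes after the last connection closes.
                var cs = @"Data Source=(localdb)\v11.0;" +
                         @"AttachDbFileName=C:\Data\MyApp.mdf;" +   // hypothetical file location
                         "Integrated Security=true";

                using (var conn = new SqlConnection(cs))
                {
                    conn.Open();
                    Console.WriteLine("Connected to: " + conn.ServerVersion);
                }
            }
        }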

    Read the article

  • Ribbon Search: Locate MS Office Ribbon Menu Features/Functions Quickly

    - by Kavitha
    In the new versions of Microsoft Office everything has changed with the introduction of Ribbon menus. Even though Ribbon menus have many advantages that simplify accessing features, at times it's a daunting task to navigate the Ribbon and find a specific command. Ribbon Search is one of the interesting freeware tools that addresses this complaint: with it, one can search the Office ribbon for any feature or function easily. It supports both Office 2007 and Office 2010 (the versions which have the ribbon). Once installation is complete, you will find a text box on top of the ribbon in all the Office applications (Outlook, Word, PowerPoint, Excel, etc.). As you type a few letters of the feature you are looking for, Ribbon Search instantly displays the path through which you can access that feature; if there are multiple ways to access the feature, they are all shown in the list. Download Ribbon Search. This article, titled "Ribbon Search: Locate MS Office Ribbon Menu Features/Functions Quickly", was originally published at Tech Dreams.

    Read the article

  • The "ambiguous in the namespace" problem

    - by Agha Usman Ahmed
    For the last few days, I had been ignoring an error that kept coming up at compile time. I spent some two hours on it but didn't get it to work. The error is quite confusing and, of course, difficult to manage:

        'ApplicationSettingsBase' is ambiguous in the namespace 'System.Configuration'
        'MailMessage' is ambiguous in the namespace 'System.Net.Mail'

    ...plus a couple of other similar errors pointing to ambiguous references in my project. The confusing part is that the MailMessage object throws a similar error when you import both the old and the new mail namespaces. For example:

        Imports System.Web.Mail
        Imports System.Net.Mail

    So if you are only encountering the ambiguity problem on the MailMessage object, it is most likely that you have imported both namespaces in your code-behind, which confuses the compiler about which type you are referencing. The quick fix for this problem is to remove Imports System.Web.Mail, and it should work smoothly. But in my case, I had never used the old ASP.NET mail namespace in my project. So I started looking at my references, and luckily I found the problem there. Follow the steps below to investigate the issue:
    1. Go to your project.
    2. Expand References.
    3. Right-click on "System" and view its properties. It should point to the following path: x:\Windows\Microsoft.NET\Framework\v2.0.50727\System.dll, where x is your operating system drive.
    This was the problem with my project: I had my operating system installed on the "D" drive, and somehow the reference was pointing to the "C" drive, which was the root cause of this problem. After that I verified all my references, found 5-6 assemblies that were pointing to the wrong path, and got it working. Also note that the problem can occur in any type of project, whether it is a website, web application, etc.
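
    If you genuinely need both mail namespaces in the same file, a namespace alias also resolves the ambiguity without removing an import. A minimal sketch in C# (VB has an equivalent Imports alias); the addresses are placeholders:

        using System.Web.Mail;              // legacy API
        using NetMail = System.Net.Mail;    // alias for the newer API

        class MailDemo
        {
            static void Main()
            {
                // Qualified via the alias, so the compiler knows which MailMessage we mean.
                var message = new NetMail.MailMessage("from@example.com", "to@example.com");
                message.Subject = "Hello";
            }
        }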

    Read the article

  • What about introduction to programming with C# via LINQPad?

    - by Gulshan
    From different questions/answers/articles on this and some other sites, I got the idea that an introductory programming language should be high level and less verbose. C# is one of the most heavily used high-level languages these days. It's also multi-paradigm and a descendant of C, the lingua franca of all programming languages. So I think it has the potential to be the introductory programming language. But I felt it's a bit verbose for novice learners. Then LINQPad came to mind. With LINQPad, someone can start with C# without its verbosity, because you can run just one statement, a few statements, or a standalone function, and you can run a full source file as well. Another thing it provides is SQL support, so it can be used for learning SQL too. And not to mention, it's free. So, what do you guys think about the idea of introducing programming with C# via LINQPad? Anything to watch out for? Any suggestions?
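
    To illustrate the low ceremony, this is roughly what a complete LINQPad "program" can look like with the query language set to C# Statements (a sketch; Dump() is LINQPad's built-in method for rendering results):

        // No class, no Main(), no boilerplate: these two statements are the whole program.
        var squares = Enumerable.Range(1, 10).Select(n => n * n);
        squares.Dump();   // LINQPad displays the sequence in its results pane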

    Read the article

  • Which ecommerce framework is fast and easy to customize?

    - by Diego
    I'm working on a project where I have to put online an ecommerce system which will require a good amount of custom features. I'm therefore looking for a framework which makes customization easy enough (from an experienced developer's perspective, I mean). The language should be PHP, and time is a constraint: I don't have months to learn. Additionally, the ecommerce site will have to handle around 200,000 products from day one, which will increase over time, hence performance is also important. So far I have examined the following:
    - Magento - Complicated and, as far as I could read, slow when the database contains many products. It's also resource intensive, and we can't afford a dedicated VPS from the beginning.
    - OpenCart - Rough at best; documentation is extremely poor. Also, it's "free" to start, but each feature is implemented via 3rd-party commercial modules.
    - OSCommerce - Buggy, inefficient, outdated.
    - ZenCart - Derived from OSCommerce; doesn't seem much better.
    - Prestashop - It looks like it has many incompatibilities. Also, most of its modules are commercial, which increases the cost.
    In short, I'm still quite undecided, as none of the above seems to satisfy the requirements. I'm open to evaluating closed-source frameworks too, if they are any better, but my knowledge of them is limited, therefore I'll welcome any suggestion. Thanks for all replies.

    Read the article

  • Tips on combining the right Art Assets with a 2D Skeleton and making it flexible

    - by DevilWithin
    I am on my first attempt to build a skeletal animation system for 2D side-scrollers, so I don't really have experience of what may appear in later stages. So I'm asking for advice from anyone who has been there and done that! My approach:
    - I built a tree structure; the root node is like the center of mass of the skeleton, allowing me to apply global transformations to the whole skeleton.
    - Then I make the hierarchy of the bones, so when moving a leg, the foot also moves. (I also make a Joint, which connects two bones, for utility.)
    - I load animations into it from simple keyframe lerps, so it does smooth movement.
    - I map the animation hierarchy to the skeleton, or a part of it, to check whether the structures match; otherwise the animation doesn't start.
    I think this is a pretty standard implementation for such a thing, even if I want to convert it to a ragdoll on the fly (a minimal sketch of the bone-hierarchy idea follows below). Now to my question: imagine a game like Prototype. There is one skeleton animation for the main character, which can animate all meshes in the game that are rigged the same way. That way the character can transform into anything without any extra code. That is pretty much what I want to do for a side-scroller. In theory it sounds easy, but I want to know if it will work well, i.e. whether different characters will be decently animated when using the same skeleton-animation pair. I can see this working well with a stickman, but what about actual humans? Will the perspective look right, or will I need to dynamically change the sprites attached to the bones? What do you recommend for such a system?
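
    A minimal C# sketch of the tree-of-bones idea described above (the names and the angle/length transform representation are my own assumptions, not the poster's code):

        using System.Collections.Generic;

        // One node in the skeleton tree: a bone with a local transform relative to its parent.
        class Bone
        {
            public string Name;
            public float LocalAngle;                 // rotation relative to the parent bone
            public float Length;                     // distance from this joint to the next
            public List<Bone> Children = new List<Bone>();

            // Walk the hierarchy: each bone's world transform is derived from its parent's,
            // so rotating a leg automatically moves the foot attached below it.
            public void Update(float parentX, float parentY, float parentAngle)
            {
                float worldAngle = parentAngle + LocalAngle;
                float endX = parentX + Length * (float)System.Math.Cos(worldAngle);
                float endY = parentY + Length * (float)System.Math.Sin(worldAngle);

                // (Here you would position/rotate the sprite attached to this bone.)

                foreach (var child in Children)
                    child.Update(endX, endY, worldAngle);
            }
        }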

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:
    "- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days.
    - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view.
    - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times
    Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough"
    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM architect with several major projects under his belt.
    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?
    Jaime Cardoso: I started working in "serious" IT in 1998, when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since then I've been bouncing between the systems side (Sun resellers, Solaris work, and even a stint with Sun's Engineering in the making of a hierarchical storage product, Sun CIS if you know it) and the applications side, mostly in LDAP and IDM. Over the years I've done support, service delivery and pre-sales / architecture design of IDM solutions for most of the big customers in Portugal. To name a few projects:
    - The first European deployment of Sun Access Manager (SAPO, Portugal Telecom)
    - The identity repository of 5/5 of the biggest Portuguese banks
    - The Portuguese government's federation of services project
    DP: OK, in your blog response you mentioned 3 topics. First: using Oracle's standards-based architecture, you were able to integrate any new system in days. Can you give an example? What systems, how long did it take, how many apps/users/accounts/roles, etc.?
    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions, but then reality kicks in. As an ISP, I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps: everything was there. As a good part of the systems and apps were running on UNIX, a connector was needed so that UNIX boxes could authenticate against AD. And that strategy worked, but each new machine required the component to be installed, monitoring had to be set up for that component, and each new app had to be independently certified.
    A self-care user portal was an ongoing project, but AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't, nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so on go-live things weren't properly tested, and a general outage followed. Several hours and one rollback later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods, and reset the effort spent on the self-service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again. Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal, combined with a user workflow, so all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services became simply a new checkbox in the OS installer, and even Tru64 systems were integrated, for the first time, with five minutes of work by a junior system admin. True, all the Windows clients and MS apps still went to the AD for their authentication needs, so from the start everybody knew that they weren't 100% free of migration pains, but now they had a single point of problems to look at. If you're looking for numbers:
    - 500K directory entries (users)
    - 200-300 systems
    After the initial setup, I personally integrated about 20 systems / apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest.
    DP: Second: using federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example?
    JC: The market is dynamic. The company that's being bought today will be sold again tomorrow. Companies that spread across different markets may see the regulator forcing a sale of part of a company for monopoly reasons, and companies that are in multiple countries have to comply with different legislations. Our job as IT architects, while addressing customer and employee authentication services, is quite hard and quite contradictory. On one hand, we need to give all of our employees access to the relevant systems, apps and resources, and we already have marketing talking with us, trying to find out who's a customer of the bought company but not yet one of ours, so they can be addressed. On the other hand, we have to do that while keeping in mind that we may have to break up all that effort, and that different countries' legislation may become a problem for a full integration plan. That's a job for user federation. You don't want to be the one telling your president that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins?
    Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's direction? Control the information flow, establish a federation trust circle, and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, and you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world.
    DP: Third: cutting response times for audit and security teams to 1/10th. Is this a real number? Can you give an example?
    JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to), and every time the audit team says they'll do another audit, we have to negotiate the size of the sample with them, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better; most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me of explaining these time savings.
    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest, so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • Issue 15: SVP Focus

    - by rituchhibber
    SVP FOCUS -- Chris Baker, SVP Oracle Worldwide ISV-OEM-Java Sales
    Chris Baker is the Global Head of ISV/OEM Sales, responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills.
    RESOURCES -- Oracle PartnerNetwork (OPN) | OPN Solutions Catalog | Oracle Exastack Program | Oracle Exastack Optimized | Oracle Cloud Computing | Oracle Engineered Systems | Oracle and Java
    "By taking part in marketing activities, our partners accelerate their sales cycles."
    -- Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA?
    Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straightforward. We want all of our ISV partners and OEMs to concentrate on the things that they do best: building applications to meet the unique industry and functional requirements of their customers. We want to ensure that we deliver a best-in-class application platform so ISVs are free to concentrate their effort on their application functionality and user experience. We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification, because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. It's really important that the architecture is the same, in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success.
    -- How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need?
    We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems, including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready or Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic.
    Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies, which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes, which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized', for partners whose applications run best on the Oracle stack and who have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date, several hundred organisations have successfully worked through our Exastack Optimized programme.
    -- How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly?
    One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build into our products to help them fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented in an on-premise environment. But it wasn't just the configuration of the application that took the time; it was actually the infrastructure: the different hardware configurations, operating systems, and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry.
    -- What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships?
    My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities: for example, the Machine-to-Machine (M2M) market, or 'The Internet of Things'. Over the next few years, many millions, indeed billions, of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications.
    The only way that our partners will be able to provide a single-vendor, end-to-end solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data: a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be based on Oracle.
    -- What opportunities are immediately opened to new ISV partners joining the OPN?
    As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation.
    -- Finally, are there any other messages that you would like to share with the Oracle ISV community?
    The crucial message that I always like to reinforce is architecture, architecture and architecture! The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud". The right architecture is critical to being competitive and to really starting to change the game. We want to help our ISV partners to do just that: to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous; just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value, all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them.
    Oracle OpenWorld 2010
    Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out.
    With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date.
    How to Exhibit at Oracle OpenWorld 2010
    Sponsorship Opportunities at Oracle OpenWorld 2010
    Advertising Opportunities at Oracle OpenWorld 2010

    Read the article

  • Events and objects being skipped in GameMaker

    - by skeletalmonkey
    Update: It turns out it's not an issue with this code (or at least not entirely). Somehow the objects I use for keylogging and player automation (basic AI that plays the game) are being 'skipped' or not loaded about half the time. These are invisible objects in a room that have basic effects such as simulating button presses, or logging them. I don't know how to better explain this problem without putting up all my code, so unless someone has heard of this issue I guess I'll be banging my head against the desk for a bit. /Update
    I've been continuing work on modifying Spelunky, but I've run into a pretty major issue with GameMaker, which I hope is just me doing something wrong. I have the code below, which is supposed to write log files named sequentially. It's placed in an End Room event, so that when a player finishes a level, it writes all their keypresses to a file. The problem is that it randomly skips files, and when it reaches about 30 logs it stops creating any new files at all.

        var file_name;
        file_count = 4;
        file_name = file_find_first("logs/*.txt", 0);
        while (file_name != "")
        {
            file_count += 1;
            file_name = file_find_next();
        }
        file_find_close();

        file = file_text_open_write("logs/log" + string(file_count) + ".txt");
        for (i = 0; i < ds_list_size(keyCodes); i += 1)
        {
            file_text_write_string(file, string(ds_list_find_value(keyCodes, i)));
            file_text_write_string(file, " ");
            file_text_write_string(file, string(ds_list_find_value(keyTimes, i)));
            file_text_writeln(file);
        }
        file_text_close(file);

    My best guess is that the first counting loop is taking too long and the whole thing is getting dropped. Also, if anyone can tell me of a better way to have sequentially numbered log files, that would also be great. Log files have to continue counting over multiple starts/stops of the game.
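
    For the sequential-numbering question, the usual pattern is to persist the last index instead of rescanning the directory on every save, which also survives game restarts. A rough sketch of the idea in C# (my suggestion; in GML the same thing can be done with the ini_read_real / ini_write_real functions):

        using System.IO;

        static class LogNaming
        {
            // Reads, increments, and persists a counter, so numbering continues
            // across runs without walking the whole logs directory each time.
            public static string NextLogPath(string dir)
            {
                string counterFile = Path.Combine(dir, "counter.txt");
                int n;
                if (!File.Exists(counterFile) ||
                    !int.TryParse(File.ReadAllText(counterFile), out n))
                    n = 0;                              // first run, or corrupt counter file
                n++;
                File.WriteAllText(counterFile, n.ToString());
                return Path.Combine(dir, "log" + n + ".txt");
            }
        }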

    Read the article

  • Partner Webcast - Oracle SOA Suite 12c: Connect 4 Cloud, Mobile, IoT with On-premise

    - by Thanos Terentes Printzios
    The pace of new business projects continues to grow, from increasing customer self-service to seamlessly connecting all your back-office and in-the-field applications. At the same time, increased integration complexity may seem inevitable, as organizations are suddenly faced with the requirement to support three new integration challenges:
    » Cloud Integration - integrate with the cloud: rapidly integrate a growing list of cloud applications with existing applications
    » Mobile Integration - the urgency to mobile-enable existing applications
    » IoT Integration - begin development on the latest trend of connecting Internet of Things (IoT) devices to your existing infrastructure
    Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution, aims to simplify, accelerate and optimize integrations. Oracle SOA Suite 12c and its associated products (Oracle Managed File Transfer, Oracle Cloud and Application Adapters, B2B and healthcare integration) offer the industry's most highly integrated platform for solving these increased integration challenges. Oracle SOA Suite 12c is a complete, integrated and best-of-breed platform. It enables next generation integration capabilities through:
    · A unified toolset for the development of services and composite applications.
    · A standards-based platform that is service enabled and easily consumable by modern web applications, allowing enterprises to quickly and easily adapt to changes in their business and IT environments.
    · Greater visibility, controls and analytics to govern how services and processes are deployed, reused and changed across their entire lifecycle.
    Join us to find out more about the new features of Oracle SOA Suite 12c, and how it enables you to reduce time to market for new project integration and to reduce integration cost and complexity. The strength of Oracle SOA Suite 12c is the ability to simplify by integrating the disparate requirements of cloud, mobile, and IoT devices with existing on-premise applications.
    Agenda:
    - Oracle SOA Suite 12c new features
    - Cloud Integration
    - Mobile Enablement
    - Internet of Things (IoT)
    - Summary - Q&A
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.
    Presenter: Heba Fouad, FMW Specialist, Technology Adoption, ECEMEA Partner Business Development
    Date: Thursday, August 28th, 10am CEST (8am UTC/11am EEST)
    Duration: 1 hour
    Register Here
    For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Read the article

  • Spec Lead call tomorrow - EG Nominations

    - by heathervc
    Tomorrow,  Thursday, 21 June, the PMO will host a call for JSR Spec Leads on the topic of Expert Group (EG) nominations (details below).  The materials and recording of this call will be posted here following the meeting. ------------------------------------------------------- Meeting information ------------------------------------------------------- Topic: SL call on EG nominations Date: Thursday, June 21, 2012 Time: 8:30 am, Pacific Daylight Time (San Francisco, GMT-07:00) Meeting Number: 807 980 273 Meeting Password: nominations ------------------------------------------------------- To start or join the online meeting ------------------------------------------------------- Go to https://jcp.webex.com/jcp/j.php?ED=179196322&UID=491098062&PW=NOWVlZTFiMmRj&RT=MiM0 ------------------------------------------------------- Audio conference information ------------------------------------------------------- +1 866 682 4770 conference code: 4467704 passcode: 1234 For global access numbers, see http://www.intercall.com/oracle/access_numbers.htm or call +1 408 774 4073 ------------------------------------------------------- For assistance ------------------------------------------------------- 1. Go to https://jcp.webex.com/jcp/mc 2. On the left navigation bar, click "Support". To add this meeting to your calendar program (for example Microsoft Outlook), click this link: https://jcp.webex.com/jcp/j.php?ED=179196322&UID=491098062&ICS=MS&LD=1&RD=2&ST=1&SHA2=2s01OCsteUoWfJbblKZYk913m920p54uzc7PTBRx8Do=

    Read the article

  • Questions about XNA

    - by Maik Klein
    I've read tons of different threads about XNA, but I still have some questions. First of all: I have 2 years of experience programming, and C# is my main language, so XNA would fit perfectly for me, but I have some concerns. People mention that C# has a performance loss compared to C++. Is this true? XNA only supports DirectX 9. I found the ANX framework, which is pretty similar to XNA but is capable of DirectX 11. Would this be a good alternative? Because I'm worried about the performance loss of C#, I searched for a C++ framework and found SFML. It's based on C++ but can be used from C#. I already have some experience with UDK, but I am really interested in creating more by myself (lighting, physics, etc.). I haven't started yet; what would you recommend I use / learn? I am going to create a first-person shooter (3D), and I have plenty of time for this. My aim is real-time lighting, real-time global illumination, image-based reflections, etc. I want to develop for Windows. Edit: I found something interesting: OpenTK. It supports the latest version of OpenGL, which is on the same level as DX11 (if my knowledge is correct). It also works with Mono.

    Read the article

  • Visual Studio, .NET Framework, and language versions

    - by Scott Dorman
    Every so often a question comes up about how Visual Studio, the .NET Framework, and a .NET programming language relate to each other. Mostly, these questions have to do with versions. The reality is that these are actually three different "products" that are versioned independently of each other but are related. Looking at how Visual Studio, the .NET Framework version, and the CLR versions relate to each other results in the following:

        Visual Studio                                CLR         .NET Framework
        Visual Studio .NET (Rainier)                 1.0.3705    1.0
        Visual Studio 2003 (Everett)                 1.1.4322    1.1
        Visual Studio 2005 (Whidbey)                 2.0.50727   2.0
        Visual Studio 2005 with .NET 3.0 Extensions  2.0.50727   2.0, 3.0
        Visual Studio 2008 (Orcas)                   2.0.50727   2.0 SP1, 3.0 SP1, 3.5
        Visual Studio 2008 SP1                       2.0.50727   2.0 SP2, 3.0 SP2, 3.5 SP1
        Visual Studio 2010 (Hawaii)                  4.0.30319   4.0

    The actual Visual Studio version numbers are:

        Product Name                 Version       Ship Date
        Visual Studio .NET           7.0.????      02/2002
        Visual Studio .NET 2002 SP1  7.0.????
        Visual Studio 2003           7.1.????      04/2003
        Visual Studio 2003 SP1       7.1.6030      09/13/2006
        Visual Studio 2005           8.0.5072
        Visual Studio 2005 SP1                     12/14/2006
        Visual Studio 2008           9.0.21022.8   11/19/2007
        Visual Studio 2008 SP1       9.0.30729.1
        Visual Studio 2010           10.0.30319.1  04/12/2010

    (For those entries that are missing information, it simply means that I didn't already know it and/or couldn't easily find it online.)

    So far, everything seems fairly reasonable and isn't terribly difficult to keep coordinated. However, when you start trying to find language versions and how those relate to .NET Framework, CLR, or Visual Studio releases, it becomes more difficult. The breakdown for the programming languages that are part of Visual Studio is:

        Framework  CLR         C#    VB    F#
        1.0        1.0.3705    1.0   7.0   -
        1.1        1.1.4322    1.1   7.1   -
        2.0        2.0.50727   2.0   8.0   -
        3.0        2.0.50727   2.0   8.0   -
        3.5        2.0.50727   3.0   9.0   -
        4.0        4.0.30319   4.0   10.0  2.0
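
    If you ever need to check which CLR a process is actually running on (the CLR column in the first table), a quick sketch:

        using System;

        class ClrVersionDemo
        {
            static void Main()
            {
                // Prints the runtime version, e.g. 2.0.50727.x for .NET 2.0-3.5
                // or 4.0.30319.x for .NET 4.0, matching the tables above.
                Console.WriteLine(Environment.Version);
            }
        }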

    Read the article

  • Going to Tech*Ed 2010

    - by Nikita Polyakov
    After years of one-night community and volunteering tasks, and even running cool events like ]inbetween[ weekend, I finally get to go to the actual event! And this time it's not in Orlando, it's in New Orleans, which is also exciting! I will be attending many Windows Phone 7 sessions, and will hover over the Windows Phone booths. I am also extremely excited about this short exchange I had on twitter with the brand new Windows Phone Partner Community account:
    @wppartner #WindwosPhone 7 Enterprise story is what we are all waiting to hear :) #wp7dev 7:51 PM Jun 1st via TweetDeck in reply to wppartner
    @NikitaP We'll definitely be covering that at @WPCDC but we'll also be talking about it at @TechEd_NA next week! about 4 hours ago via CoTweet in reply to NikitaP
    As you might know, I also love Microsoft Expression Blend and SketchFlow. I will be hanging out at the Microsoft Expression TLC [Technical Learning Center] booths in the Expo Hall during these times:

        Day     Start     Finish
        7-Jun   10:30 AM  12:30 PM
        7-Jun   7:30 PM   9:00 PM
        8-Jun   2:45 PM   5:00 PM
        9-Jun   2:45 PM   5:00 PM
        10-Jun  12:15 PM  3:00 PM

    Feel free to find me and chat me up. I'll be twittering under @NikitaP; if you are in the Florida dev community, use the #teched_fl hash tag. If you are going and you have a Windows Phone 6.5, iPhone/iPad, Android, or a Vista/Win7 laptop with you, grab this: Kevin Wolf's TechEd 2010 Schedule and Twitter Tool - One App, 5 Different Platforms. In one word: Aaaaaaamazing!

    Read the article

  • Problems Dual Boot

    - by user104108
    A few months ago I decided to install Ubuntu 12.04 on my PC alongside my Windows 7 partition. In order to do that and avoid any mistakes, I followed the steps of this tutorial: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/2/ Everything was going well until I decided to update to the 12.10 release. I don't know what happened, but after I updated my Ubuntu, it stopped working; it didn't even launch. When I turned on my PC and chose to run "Ubuntu 12.04" on the GRUB screen, a weird message appeared. Well, so I decided to install Ubuntu 12.10 and forget about the 12.04 partition, no problem. I erased the partitions used for Ubuntu 12.04 with EaseUS Partition Manager. However, when I start my PC, there is still the option of "Ubuntu 12.04" to choose; is that bad? And what about now, can I use the Windows installer for Ubuntu ( http://www.ubuntu.com/download/help/install-ubuntu-with-windows ) to install Ubuntu 12.10? What should I do to have Ubuntu 12.10 and Windows 7 in dual boot again? Thanks, Thales.

    Read the article

  • Every time I try to connect to my box using SSH, it fails to connect

    - by YumYumYum
    From any other PC, doing SSH to my Ubuntu 11.10 box is failing. My network setup: Telenet ISP (Belgium), fiber cable < RJ45 cable straight to the Ubuntu PC. This happens even though SSH is running. On the other PC, retrying over and over:

        $ ping 192.168.0.128
        PING 192.168.0.128 (192.168.0.128) 56(84) bytes of data.
        From 192.168.0.226 icmp_seq=1 Destination Host Unreachable
        From 192.168.0.226 icmp_seq=2 Destination Host Unreachable
        From 192.168.0.226 icmp_seq=3 Destination Host Unreachable
        From 192.168.0.226 icmp_seq=4 Destination Host Unreachable
        $ sudo service iptables stop
        Stopping iptables (via systemctl): [ OK ]
        $ ssh [email protected]
        ssh: connect to host 192.168.0.128 port 22: No route to host
        $ ssh [email protected]
        ssh: connect to host 192.168.0.128 port 22: No route to host
        $ ssh [email protected]
        ssh: connect to host 192.168.0.128 port 22: No route to host
        $ ssh [email protected]
        ssh: connect to host 192.168.0.128 port 22: No route to host
        $ ssh [email protected]
        Connection closed by 192.168.0.128
        $ ssh [email protected]
        [email protected]'s password:
        Connection closed by UNKNOWN
        $ ssh [email protected]
        ssh: connect to host 192.168.0.128 port 22: No route to host
        $ ssh [email protected]
        ssh: connect to host 192.168.0.128 port 22: No route to host

    Follow up:
    -- checked the cable using a cable tester and other detectors; no problem found in the cable; tried 10 random cables
    -- the adapter is not broken; checked it using a circuit tester after opening the system (the card is new, so it's not a network adapter card problem); the LEDs are showing OK
    -- used a LiveCD and did the same ping test; same problem
    -- disabled IPv6 100% to make sure it's not the cause
    -- disabled iptables 100%, so it's also not the issue
    -- some more info:

        $ nmap 192.168.0.128
        Starting Nmap 5.50 ( http://nmap.org ) at 2012-06-08 19:11 CEST
        Nmap scan report for 192.168.0.128
        Host is up (0.00045s latency).
        All 1000 scanned ports on 192.168.0.128 are closed (842) or filtered (158)
        Nmap done: 1 IP address (1 host up) scanned in 6.86 seconds
        ubuntu@ubuntu:~$ netstat -aunt | head
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address           Foreign Address        State
        tcp        0      0 127.0.0.1:631           0.0.0.0:*              LISTEN
        tcp        0      1 192.168.0.128:58616     74.125.132.99:80       FIN_WAIT1
        tcp        0      0 192.168.0.128:56749     199.7.57.72:80         ESTABLISHED
        tcp        0      1 192.168.0.128:58614     74.125.132.99:80       FIN_WAIT1
        tcp        0      0 192.168.0.128:49916     173.194.65.113:443     ESTABLISHED
        tcp        0      1 192.168.0.128:45699     64.34.119.101:80       SYN_SENT
        tcp        0      0 192.168.0.128:48404     64.34.119.12:80        ESTABLISHED
        tcp        0      0 192.168.0.128:54161     67.201.31.70:80        TIME_WAIT

    $ sudo killall dnsmasq did not solve the problem. And, like many other Q/As were suggesting, these also look normal:

        $ iptables --list
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        $ netstat -nr
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG        0 0          0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
        $ ssh -vvv [email protected]
        OpenSSH_5.6p1, OpenSSL 1.0.0j-fips 10 May 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.0.128 [192.168.0.128] port 22.
        debug1: Connection established.
        debug3: Not a RSA1 key file /home/sun/.ssh/id_rsa.
debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/sun/.ssh/id_rsa type 1 debug1: identity file /home/sun/.ssh/id_rsa-cert type -1 debug1: identity file /home/sun/.ssh/id_dsa type -1 debug1: identity file /home/sun/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1 debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 118/256 debug2: bits set: 539/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: match line 139 debug1: Host '192.168.0.128' is known and matches the RSA host key. debug1: Found key in /home/sun/.ssh/known_hosts:139 debug2: bits set: 544/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/sun/.ssh/id_rsa (0x213db960) debug2: key: /home/sun/.ssh/id_dsa ((nil)) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/sun/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/sun/.ssh/id_dsa debug3: no such identity: /home/sun/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: packet_send2: adding 64 (len 60 padlen 4 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). Authenticated to 192.168.0.128 ([192.168.0.128]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. 
debug3: Ignored env ORBIT_SOCKETDIR
debug3: Ignored env XDG_SESSION_ID
debug3: Ignored env HOSTNAME
debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE_PID
debug3: Ignored env IMSETTINGS_INTEGRATE_DESKTOP
debug3: Ignored env GPG_AGENT_INFO
debug3: Ignored env TERM
debug3: Ignored env HARDWARE_PLATFORM
debug3: Ignored env SHELL
debug3: Ignored env DESKTOP_STARTUP_ID
debug3: Ignored env HISTSIZE
debug3: Ignored env XDG_SESSION_COOKIE
debug3: Ignored env GJS_DEBUG_OUTPUT
debug3: Ignored env WINDOWID
debug3: Ignored env GNOME_KEYRING_CONTROL
debug3: Ignored env QTDIR
debug3: Ignored env QTINC
debug3: Ignored env GJS_DEBUG_TOPICS
debug3: Ignored env IMSETTINGS_MODULE
debug3: Ignored env USER
debug3: Ignored env LS_COLORS
debug3: Ignored env SSH_AUTH_SOCK
debug3: Ignored env USERNAME
debug3: Ignored env SESSION_MANAGER
debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE
debug3: Ignored env PATH
debug3: Ignored env MAIL
debug3: Ignored env DESKTOP_SESSION
debug3: Ignored env QT_IM_MODULE
debug3: Ignored env PWD
debug1: Sending env XMODIFIERS = @im=none
debug2: channel 0: request env confirm 0
debug1: Sending env LANG = en_US.utf8
debug2: channel 0: request env confirm 0
debug3: Ignored env KDE_IS_PRELINKED
debug3: Ignored env GDM_LANG
debug3: Ignored env KDEDIRS
debug3: Ignored env GDMSESSION
debug3: Ignored env SSH_ASKPASS
debug3: Ignored env HISTCONTROL
debug3: Ignored env HOME
debug3: Ignored env SHLVL
debug3: Ignored env GDL_PATH
debug3: Ignored env GNOME_DESKTOP_SESSION_ID
debug3: Ignored env LOGNAME
debug3: Ignored env QTLIB
debug3: Ignored env CVS_RSH
debug3: Ignored env DBUS_SESSION_BUS_ADDRESS
debug3: Ignored env LESSOPEN
debug3: Ignored env WINDOWPATH
debug3: Ignored env XDG_RUNTIME_DIR
debug3: Ignored env DISPLAY
debug3: Ignored env G_BROKEN_FILENAMES
debug3: Ignored env COLORTERM
debug3: Ignored env XAUTHORITY
debug3: Ignored env _
debug2: channel 0: request shell confirm 1
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0

Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-generic x86_64)
 * Documentation: https://help.ubuntu.com/
297 packages can be updated.
92 updates are security updates.
New release '12.04 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Fri Jun 8 07:45:15 2012 from 192.168.0.226

sun@SystemAX51:~$ ping 19<--------Lost connection again--------------

Following the logs: dmesg shows very abnormal output. The eth0 link keeps being brought up automatically, and it keeps going down on its own as well.
[ 2025.897511] r8169 0000:02:00.0: eth0: link up
[ 2029.347649] r8169 0000:02:00.0: eth0: link up
[ 2030.775556] r8169 0000:02:00.0: eth0: link up
[ 2038.242203] r8169 0000:02:00.0: eth0: link up
[ 2057.267801] r8169 0000:02:00.0: eth0: link up
[ 2062.871770] r8169 0000:02:00.0: eth0: link up
[ 2082.479712] r8169 0000:02:00.0: eth0: link up
[ 2285.630797] r8169 0000:02:00.0: eth0: link up
[ 2308.417640] r8169 0000:02:00.0: eth0: link up
[ 2480.948290] r8169 0000:02:00.0: eth0: link up
[ 2824.884798] r8169 0000:02:00.0: eth0: link up
[ 3030.022183] r8169 0000:02:00.0: eth0: link up
[ 3306.587353] r8169 0000:02:00.0: eth0: link up
[ 3523.566881] r8169 0000:02:00.0: eth0: link up
[ 3619.839585] r8169 0000:02:00.0: eth0: link up
[ 3682.154393] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[ 3899.866854] r8169 0000:02:00.0: eth0: link up
[ 4723.978269] r8169 0000:02:00.0: eth0: link up
[ 4807.415682] r8169 0000:02:00.0: eth0: link up
[ 5101.865686] r8169 0000:02:00.0: eth0: link up

How do I fix it?

From http://ubuntuforums.org/showthread.php?t=1959794:

$ apt-get install openipmi openhpi-plugin-ipmi
$ openipmish
> help redisp_cmd on|off
> redisp_cmd on
redisp set

Final follow-up:

Step 1: this is a BUG in the r8169 network card driver.
Step 2: get the latest build from http://www.realtek.com/downloads/downloadsView.aspx?Langid=1&PNid=4&PFid=4&Level=5&Conn=4&DownTypeID=3&GetDown=false&Downloads=true#RTL8110SC(L)
Step 3: build and install:

$ cd /var/tmp/driver
$ tar xvfj r8169.tar.bz2
$ make clean modules && make install
$ rmmod r8169
$ depmod
$ cp src/r8169.ko /lib/modules/3.xxxx/kernel/drivers/net/r8169.ko
$ modprobe r8169
$ update-initramfs -u
$ init 6

Voila!!
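After the reboot, you can sanity-check that the rebuilt driver is actually the one in use. A minimal sketch with standard tools, assuming the interface is still named eth0 as above:

$ ethtool -i eth0                  # driver name, version and firmware bound to eth0
$ modinfo r8169 | grep -i version  # version of the r8169 module now on disk
$ dmesg | grep eth0                # confirm the link up/down flapping has stopped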

    Read the article

  • Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?

    - by Kendall Gifford
I'd like to move away from Windows on my current workstation. The only thing holding me back is that I have 3 monitors connected to the system, and I really take advantage of the real estate when working. I just installed Ubuntu 10.10 on the system, and one of the monitors is up and running just fine. This monitor is connected to the built-in Intel adapter. I also have two old nVidia GeForce4 MX 4000 (nv19pl) cards in my two PCI slots, with two monitors connected to them respectively. I installed the legacy (and proprietary) nVidia drivers (the nvidia-96 package) that claim to support these old cards. Now the question is how to get X configured to use all adapters (using two different drivers) so I can use all three monitors (and is this even possible?). From what I've read, it looks like I'll have to write an xorg.conf file, since the nVidia driver doesn't support the auto-magic configuration supported by other drivers. On this site: http://wiki.ubuntu.com/X/Config it says that on 10.10 I just need to write an xorg.conf "containing only those sections and options that you need to override Xorg's autoconfigurated settings". So, does this mean I can get away with only including the nVidia-specific configuration stuff, and everything else will get auto-configured? Or will providing a config with a "Device" section prevent the auto-magic from detecting and using the Intel adapter? I ran the included nvidia-xconfig to generate a basic, nVidia-specific xorg.conf, but I'm hesitant to reboot with it in place, suspecting I'll end up with a screwed-up display. Also, is there any way (any tool or command) to generate an xorg.conf from the current, auto-configured running state of an X session? If I have to write a full, complete config, I'd rather start with one that includes everything that's been auto-detected thus far (and merge it with my nVidia version). Anyhow, any info and thoughts are greatly appreciated (as are answers).
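For orientation, here is a minimal xorg.conf sketch of the kind of three-screen, two-driver layout being asked about. The BusID values and identifiers are assumptions for illustration (check the real ones with lspci), and whether the legacy nvidia-96 driver will coexist with the intel driver in one layout is exactly the open question, so treat this as a starting point rather than a tested configuration:

Section "ServerLayout"
    Identifier "ThreeMonitors"
    Screen     0 "IntelScreen"
    Screen     1 "NvidiaScreen1" RightOf "IntelScreen"
    Screen     2 "NvidiaScreen2" RightOf "NvidiaScreen1"
EndSection

Section "Device"
    Identifier "IntelGPU"
    Driver     "intel"
    BusID      "PCI:0:2:0"    # hypothetical; confirm with lspci
EndSection

Section "Device"
    Identifier "NvidiaGPU1"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"    # hypothetical
EndSection

Section "Device"
    Identifier "NvidiaGPU2"
    Driver     "nvidia"
    BusID      "PCI:2:0:0"    # hypothetical
EndSection

Section "Screen"
    Identifier "IntelScreen"
    Device     "IntelGPU"
EndSection

Section "Screen"
    Identifier "NvidiaScreen1"
    Device     "NvidiaGPU1"
EndSection

Section "Screen"
    Identifier "NvidiaScreen2"
    Device     "NvidiaGPU2"
EndSection

As for dumping the current state: as far as I know, Xorg -configure (run from a text console with X stopped) writes an xorg.conf.new skeleton, but it is generated from hardware probing rather than from the live, auto-configured session.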

    Read the article

  • Team Foundation Server 2012 Build Global List Problems

    - by Bob Hardister
My experience with the upgrade and use of TFS 2012 has been very positive. I did come across a couple of issues recently that tripped things up for a while.

ISSUE 1

The first issue is that 2012, prior to Update 1, published an invalid build list item value to the collection global list. In 2010, the build global list item value syntax is an underscore between the build definition and the build number. In the 2012 RTM this underscore was replaced with a backslash, which is invalid. Specifically, an upload of the global list fails when the backslash is followed at some point by a period. The error when using the API is:

<detail ExceptionMessage="TF26204: The account you entered is not recognized. Contact your Team Foundation Server administrator to add your account." BaseExceptionName="Microsoft.TeamFoundation.WorkItemTracking.Server.ValidationException">
  <details id="600019" xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/faultdetail/03" />
</detail>

When uploading the global list via the Process Editor, a corresponding error dialog is shown (screenshot in the original post). This issue is corrected in Update 1, where the backslash is changed to a forward slash.

ISSUE 2

The second issue is that when upgrading from 2010 to 2012, the builds in 2010 are not published to the 2012 global list. After the upgrade the 2012 global list doesn't have any builds, and only builds run in 2012 are published to it. This was reported to the MSDN forums and Connect.

To correct this I wrote a utility to pull all the builds and recreate the builds global list for each project in each collection. It is a console application with a Program.cs, a GlobalLists.cs and an app.config (not published here; a hypothetical sketch follows the code below). The utility connects to TFS 2012 and loops through the collections, or through a target collection as specified in the app.config. It then loops through the projects, the build definitions, and the builds, creates a global list for each project that has at least one build, and imports the new list to TFS. Here's the code for the Program and GlobalLists classes.

Program.CS

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Server;
using System.IO;
using System.Xml;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using System.Diagnostics;
using Utilities;
using System.Configuration;

namespace TFSProjectUpdater_CLC
{
    class Program
    {
        static void Main(string[] args)
        {
            // build a timestamped log file name and attach a trace listener
            DateTime temp_d = System.DateTime.Now;
            string logName = temp_d.ToShortDateString();
            logName = logName.Replace("/", "_");
            logName = logName + "_" + temp_d.TimeOfDay;
            logName = logName.Replace(":", ".");
            logName = "TFSGlobalListBuildsUpdater_" + logName + ".log";
            Trace.Listeners.Add(new TextWriterTraceListener(Path.Combine(ConfigurationManager.AppSettings["logLocation"], logName)));
            Trace.AutoFlush = true;
            Trace.WriteLine("Start:" + DateTime.Now.ToString());
            Console.WriteLine("Start:" + DateTime.Now.ToString());

            string tfsServer = ConfigurationManager.AppSettings["TargetTFS"].ToString();
            GlobalLists gl = new GlobalLists();

            // replace this with the URL to your TFS instance.
            Uri tfsUri = new Uri("https://" + tfsServer + "/tfs");
            //bool foundLite = false;
            TfsConfigurationServer config = new TfsConfigurationServer(tfsUri, new UICredentialsProvider());
            config.EnsureAuthenticated();

            ITeamProjectCollectionService collectionService = config.GetService<ITeamProjectCollectionService>();
            IList<TeamProjectCollection> collections = collectionService.GetCollections().OrderBy(collection => collection.Name.ToString()).ToList();

            // target collection
            string targetCollection = ConfigurationManager.AppSettings["targetCollection"];
            foreach (TeamProjectCollection coll in collections)
            {
                if (targetCollection.Equals(string.Empty))
                {
                    // no target specified: process every collection except these system ones
                    if (!coll.Name.Equals("TFS Archive") && !coll.Name.Equals("DefaultCol") && !coll.Name.Equals("Team Project Template Gallery"))
                    {
                        doWork(coll, tfsServer);
                    }
                }
                else
                {
                    if (coll.Name.Equals(targetCollection))
                    {
                        doWork(coll, tfsServer);
                    }
                }
            }

            Trace.WriteLine("Finished:" + DateTime.Now.ToString());
            Console.WriteLine("Finished:" + DateTime.Now.ToString());
            if (System.Diagnostics.Debugger.IsAttached)
            {
                Console.WriteLine("\nHit any key to exit...");
                Console.ReadKey();
            }
            Trace.Close();
        }

        static void doWork(TeamProjectCollection coll, string tfsServer)
        {
            GlobalLists gl = new GlobalLists();

            // target project
            string targetProject = ConfigurationManager.AppSettings["targetProject"];
            Trace.WriteLine("Collection: " + coll.Name);

            Uri u = new Uri("https://" + tfsServer + "/tfs/" + coll.Name.ToString());
            TfsTeamProjectCollection c = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(u);
            ICommonStructureService icss = c.GetService<ICommonStructureService>();
            try
            {
                Trace.WriteLine("\tChecking Collection Global Lists.");
                gl.RebuildBuildGlobalLists(c);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Exception! :" + coll.Name);
            }
        }
    }
}

GlobalLists.CS

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;
using Microsoft.TeamFoundation.Server;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using Microsoft.TeamFoundation.Build.Client;
using System.Configuration;
using System.Xml;
using System.Xml.Linq;
using System.Diagnostics;

namespace Utilities
{
    public class GlobalLists
    {
        string GL_NewList = @"<gl:GLOBALLISTS xmlns:gl=""http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists"">
  <GLOBALLIST>
  </GLOBALLIST>
</gl:GLOBALLISTS>";

        public void RebuildBuildGlobalLists(TfsTeamProjectCollection _tfs)
        {
            WorkItemStore wis = new WorkItemStore(_tfs);

            // export the current global lists file for the collection to save as a backup
            XmlDocument globalListsFile = wis.ExportGlobalLists();
            globalListsFile.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_backupGlobalList.xml");
            LogExportCurrentCollectionGlobalListsAsBackup(_tfs);

            // build a new global build list from each build definition within each team project
            IBuildServer buildServer = _tfs.GetService<IBuildServer>();
            foreach (Project p in wis.Projects)
            {
                XmlDocument newProjectGlobalList = new XmlDocument();
                newProjectGlobalList.LoadXml(GL_NewList);
                LogInstanciateNewProjectBuildGlobalList(_tfs, p);
                BuildNewProjectBuildGlobalList(_tfs, wis, newProjectGlobalList, buildServer, p);
                LogEndOfProject(_tfs, p);
            }
        }

        // Private Methods
        private static void BuildNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, WorkItemStore wis, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p)
        {
            // locate the template node
            XmlNamespaceManager nsmgr = new XmlNamespaceManager(newProjectGlobalList.NameTable);
            nsmgr.AddNamespace("gl", "http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists");
            XmlNode node = newProjectGlobalList.SelectSingleNode("//gl:GLOBALLISTS/GLOBALLIST", nsmgr);
            LogLocatedGlobalListNode(_tfs, p);

            // add the name attribute for the project build global list
            XmlElement buildListNode = (XmlElement)node;
            buildListNode.SetAttribute("name", "Builds - " + p.Name);
            LogAddedBuildNodeName(_tfs, p);

            // add new builds to the team project build global list
            bool buildsExist = false;
            if (AddNewBuilds(_tfs, newProjectGlobalList, buildServer, p, node, buildsExist))
            {
                // import the new build global list for each project that has builds
                newProjectGlobalList.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_" + p.Name + "_" + "newGlobalList.xml"); // write out temp copy of the global list file to be imported
                LogImportReady(_tfs, p);
                wis.ImportGlobalLists(newProjectGlobalList.InnerXml);
                LogImportComplete(_tfs, p);
            }
        }

        private static bool AddNewBuilds(TfsTeamProjectCollection _tfs, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p, XmlNode node, bool buildsExist)
        {
            var buildDefinitions = buildServer.QueryBuildDefinitions(p.Name);
            foreach (var buildDefinition in buildDefinitions)
            {
                var builds = buildDefinition.QueryBuilds();
                foreach (var build in builds)
                {
                    // insert the builds into the current build list node in the correct 2012 format
                    buildsExist = true;
                    XmlElement listItem = newProjectGlobalList.CreateElement("LISTITEM");
                    listItem.SetAttribute("value", buildDefinition.Name + "/" + build.BuildNumber.ToString().Replace(buildDefinition.Name + "_", ""));
                    node.AppendChild(listItem);
                }
            }
            if (buildsExist) LogBuildListCreated(_tfs, p);
            else LogNoBuildsInProject(_tfs, p);
            return buildsExist;
        }

        // Logging Methods
        private static void LogExportCurrentCollectionGlobalListsAsBackup(TfsTeamProjectCollection _tfs)
        {
            Trace.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
            Console.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
        }

        private void LogInstanciateNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, Project p)
        {
            Trace.WriteLine("\t\tInstanciated the new build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
            Console.WriteLine("\t\tInstanciated the new build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogLocatedGlobalListNode(TfsTeamProjectCollection _tfs, Project p)
        {
            Trace.WriteLine("\t\tLocated the build global list node for project " + p.Name + " in the " + _tfs.Name + " collection.");
            Console.WriteLine("\t\tLocated the build global list node for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogAddedBuildNodeName(TfsTeamProjectCollection _tfs, Project p)
        {
            Trace.WriteLine("\t\tAdded the name attribute to the build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
            Console.WriteLine("\t\tAdded the name attribute to the build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogBuildListCreated(TfsTeamProjectCollection _tfs, Project p)
        {
            Trace.WriteLine("\t\tAdded all builds into the " + "Builds - " + p.Name + " list in the " + _tfs.Name + " collection.");
            Console.WriteLine("\t\tAdded all builds into the " + "Builds - \n\t\t\t" + p.Name + " list in the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogNoBuildsInProject(TfsTeamProjectCollection _tfs, Project p)
        {
            Trace.WriteLine("\t\tNo builds found for project " + p.Name + " in the " + _tfs.Name + " collection.");
            Console.WriteLine("\t\tNo builds found for project " + p.Name + " \n\t\t\tin the " + _tfs.Name + " collection.");
        }

        private void LogEndOfProject(TfsTeamProjectCollection _tfs, Project p)
        {
            Trace.WriteLine("\t\tEND OF PROJECT " + p.Name);
            Trace.WriteLine(" ");
            Console.WriteLine("\t\tEND OF PROJECT " + p.Name);
            Console.WriteLine();
        }

        private static void LogImportReady(TfsTeamProjectCollection _tfs, Project p)
        {
            Trace.WriteLine("\t\tReady to import the build global list for project " + p.Name + " to the " + _tfs.Name + " collection.");
            Console.WriteLine("\t\tReady to import the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection.");
        }

        private static void LogImportComplete(TfsTeamProjectCollection _tfs, Project p)
        {
            Trace.WriteLine("\t\tImport of the build global list for project " + p.Name + " to the " + _tfs.Name + " collection completed.");
            Console.WriteLine("\t\tImport of the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection completed.");
        }
    }
}
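The app.config is not published above, but the utility reads exactly four settings through ConfigurationManager.AppSettings. A hypothetical minimal sketch (all values are placeholders to adjust for your environment):

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <!-- placeholders; point these at your own environment -->
    <add key="logLocation" value="C:\Temp" />
    <add key="TargetTFS" value="tfs.example.com" />
    <!-- leave targetCollection empty to process every collection -->
    <add key="targetCollection" value="" />
    <!-- read by doWork but not otherwise used in the code above -->
    <add key="targetProject" value="" />
  </appSettings>
</configuration>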

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We'll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

Installing the driver

Info on how to install the driver can be found in an earlier blog post here.

Establishing connections

As you click on the "Add Connection" link in the left pane, you will notice that it's now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it's possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated, you need to pick an application from the server. The control is editable, so you can create a new application if you don't want to make changes to an existing application. If you choose a new application name, you will be prompted for confirmation before it gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there's any connectivity error, the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

The context for remote servers

Let's take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context; think of it as a class that wraps all the code that you're writing. If you're connecting to a live server, the context will contain the following:

- The application object itself.
- All entities present in this application (sources, sinks, subjects and processes).

The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class, and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types; more on that later.

Remoting is not supported

As you play with the entities exposed by the context, you will notice that you can't read and write directly to/from them. If, for instance, you're trying to dump the content of an entity, you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server, and dumping its content means reading the events produced by this entity into the local process.

ObservableSource.Dump();

will yield the following error:

Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.

This basically tells you that you can call the Bind() method to direct the output of this source to a sink, which has to be defined on the remote machine as well. You can't bring the results to the LinqPad window unless you write code specifically for that.

Compose queries

You may ask: what's the purpose of all that? After all, the same information is present in the EventFlowDebugger, so why bother showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily.

For example, let's say that this code creates an entity:

using (var server = Server.Connect(...))
{
    var a = server.CreateApplication("WhiteFish");
    var src = a
        .DefineObservable<int>(() => Observable.Range(0, 3))
        .Deploy("ObservableSource");

If later we want to compose with the source, we have to fetch it, and then we can bind something to it:

    a.GetObservable<int>("ObservableSource").Bind(...

This means that we had to know a bunch of things about it: that it's a source, that it's an observable, that it produces a result with payload Int32, and that it's named "ObservableSource". Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it's much easier to make sense of what's available.

Let's look at a scenario where composition is plausible. With the new programming model it's possible to create "cold" sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it's much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source: we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int. In the particular case of my example, I had the source produce a range of integers, and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.

Proxy Types

Here's an interesting scenario: what if someone created a source on a server but forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type:

Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

Now if we refresh the connection, we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type:

var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
var filter = from i in O1
             where i > threshold
             select i;
filter.Deploy("O2");

You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous; the test is whether the LinqPad driver can resolve the type or not. If it's not possible, then a new type will be generated that approximates the type that exists on the server.

Control metadata

In addition to composing processes on top of the existing entities, we can do other useful things. We can delete them; nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There's another way to find out what's behind a property: dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

Runtime information

So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection, you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).

Regards,
The StreamInsight Team
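As a small addendum to the "Runtime information" section above, here is a rough sketch of the bind-and-run step. The method names (DefineObserver, GetObserver, Run) are from my reading of the StreamInsight 2.1 API rather than from the original post, and the entity names in quotes are hypothetical, so treat this as an assumption, not a verified implementation:

// Deploy a trivial observer on the server, then bind the deployed
// source to it and run the resulting process on the remote machine.
var sink = Application
    .DefineObserver(() => Observer.Create<int>(i => Console.WriteLine(i)))
    .Deploy("ConsoleSink");

var process = Application
    .GetObservable<int>("ObservableSource")
    .Bind(Application.GetObserver<int>("ConsoleSink"))
    .Run("MyProcess");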

    Read the article

  • Integration of routes that are not resources in an MVC REST style application

    - by Emil Lerch
I would like to keep my application relatively REST-pure for the sake of consistency, but I'm struggling philosophically with the relatively few views (maybe just one) that I'll need to build that don't relate to resources directly, and therefore don't fit into a REST style. As an example, take the home page. Ruby on Rails seems to bail on its otherwise RESTful approach for this very basic need of all web sites. The home page appears special:

- You can get it, but a GET at the resource level is supposed to give you a collection of elements. I can imagine this being the list of routes, maybe, but that seems a stretch and doesn't address anything else.
- Getting the home page by id doesn't seem to make a whole lot of sense: what's the element of a home collection? Again, maybe routes, but a GET on a route would do what? Redirect? This feels odd.
- You can't delete it (arguably you could allow this for administrators).
- Adding a second one doesn't make sense, except possibly if the elements were routes.
- Updating it might make sense for administrators, but AFAIK REST doesn't describe updates on the resource directly, only on elements of the resource (this article explicitly says "UNUSED" for PUTs on the resource).

Is the "right" thing to do just to special-case these types of things? At the end of the day, I can wrap my head around most of an application being gathered around resources... I can't think of another good example other than a home page, but since that's the start of an application, I think it warrants some thought.

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
In one of my previous posts I described and demonstrated how to use NPM packages in Node.js and Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, and then used Git for Windows to commit my changes and sync them to the WAWS git repository. WAWS then triggers a new deployment to host my Node.js application. Someone may notice that an NPM package may contain many files and can be fairly large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package, "express", which is a rich MVC framework for Node.js, is about 1MB. When I first push my code to Windows Azure, all of them must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post, I will introduce how to make WAWS install all required packages for us when deploying.

Let's Start with a Demo

A demo is the most straightforward approach. Let's create a new WAWS and clone it to my local disk. Drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let's say I want to install "express".

Then create a new Node.js file named "server.js" and paste in the code below:

var express = require("express");
var app = express();

app.get("/", function(req, res) {
    res.send("Hello Node.js and Express.");
});

console.log("Web application opened.");
app.listen(process.env.PORT);

If we switch to Git for Windows right now, we will find that it detected the changes we made, which include "server.js" and all files under the "node_modules" folder. What we need to upload should only be our source code, but the huge package files would have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud.

First we need to add a special file named ".gitignore". It seems this cannot be done directly from the file explorer, since this file name consists only of an extension. So we need to do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore". If the command window asks for input, just press Enter.

echo > .gitignore

Now open this file, copy in the content below and save:

node_modules

Now if we switch to Git for Windows, we will find that the packages under the "node_modules" folder are no longer in the change list. So if we commit and push now, the "express" package will not be uploaded to Windows Azure.

Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into that file and save:

{
    "name": "npmdemo",
    "version": "1.0.0",
    "dependencies": {
        "express": "*"
    }
}

Now back in Git for Windows, commit our changes and push them to WAWS. Then let's open the WAWS in the developer portal; we will see that a new deployment has finished. Click the arrow on the right side of this deployment and we can see how WAWS handled it. In particular, we can see that WAWS executed NPM. And if we open the log, we can review which commands WAWS executed to install the packages, along with the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I don't need to upload the whole package to Azure.

Open this website and we can see the result, which proves that "express" was installed successfully.

What's Happened Under the Hood

Now let's explain a bit about what ".gitignore" and "package.json" mean.

The ".gitignore" file is an ignore configuration file for a git repository. All files and folders listed in ".gitignore" will be skipped by git push. In the example above I put "node_modules" into this file in my local repository. This means: do not track or upload any files under the "node_modules" folder. So by using ".gitignore" I excluded all packages from being uploaded to Windows Azure. ".gitignore" can contain files and folders. It can also contain the files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to make the SQL package included.

The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author, etc. in it, in JSON format. And we can also list the dependent packages, to indicate which packages this Node.js application needs. In WAWS, name and version are required. When a deployment happens, WAWS will look into this file, find the dependent packages, and execute the NPM command to install them one by one. So in the demo above I put "express" into this file so that WAWS would install it for me automatically.

I updated the dependencies section of the "package.json" file manually, but this can be done partially automatically. If we have a valid "package.json" in our local repository, then when we are going to install a package we can specify the "--save" parameter with the "npm install" command, so that NPM will update the dependencies section for us. For example, when I wanted to install the "azure" package I would execute the command below. Note that I added "--save" to the command.

npm install azure --save

Once it finishes, my "package.json" will be updated automatically. Each dependent package will be listed there; the JSON key is the package name, while the value is the version range. Below is a brief list of the version range formats. For more information about "package.json" please refer here.

Format     | Description                                                            | Example
version    | Must match the version exactly.                                        | "azure": "0.6.7"
>=version  | Must be equal to or greater than the version.                          | "azure": ">=0.6.0"
1.2.x      | The version number must start with the supplied digits, but any digit | "azure": "0.6.x"
           | may be used in place of the x.                                         |
~version   | The version must be at least as high as the range, and it must be     | "azure": "~0.6.7"
           | less than the next major revision above the range.                     |
*          | Matches any version.                                                   | "azure": "*"

WAWS will install the proper version of the packages based on what you define here. The process of WAWS git deployment and NPM installation would be like this.

But Some Packages…

As we know, when we specify the dependencies in "package.json", WAWS will download and install them on the cloud. For most packages it works very well. But some special packages may not work. This means that if the package installation needs some special environment constraints, it might fail. For example, the SQL Server Driver for Node.js package needs "node-gyp", Python and C++ 2010 installed on the target machine during the NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail, since there's no "node-gyp", Python or C++ 2010 on the WAWS virtual machine.

For example, the "server.js" file:

var express = require("express");
var app = express();

app.get("/", function(req, res) {
    res.send("Hello Node.js and Express.");
});

var sql = require("msnodesql");
var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
app.get("/sql", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

console.log("Web application opened.");
app.listen(process.env.PORT);

The "package.json" file:

{
    "name": "npmdemo",
    "version": "1.0.0",
    "dependencies": {
        "express": "*",
        "msnodesql": "*"
    }
}

And it failed to deploy to WAWS. From the NPM log we can see it's because "msnodesql" cannot be installed on WAWS.

The solution is to ignore all packages in ".gitignore" except "msnodesql", and upload that package ourselves. This can be done with the content below. We first un-ignore the "node_modules" folder. Then we ignore all sub folders, while still letting git look into each of them. And then we un-ignore the one sub folder named "msnodesql", which is the SQL Server Node.js driver:

!node_modules/

node_modules/*
!node_modules/msnodesql

For more information about the ".gitignore" syntax, please refer to this thread. Now if we go to Git for Windows, we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the "msnodesql" dependency from "package.json". Commit and push to WAWS. Now we can see the deployment complete successfully. And then we can use the Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.

Summary

In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can exclude the dependent packages from our Node.js repository and let Windows Azure Web Site download and install them while deploying. Special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", can be put into the publish payload instead. The combination of Windows Azure Web Site, Node.js and NPM makes it even easier and quicker for us to develop and deploy our Node.js applications to the cloud.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
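One editor's aside: on newer Git releases (1.8.2 and later) you can verify un-ignore rules like these before committing. A small sketch, where the paths are hypothetical examples:

$ git check-ignore -v node_modules/express/index.js        # should report the node_modules/* rule
$ git check-ignore -v node_modules/msnodesql/package.json  # should print nothing, i.e. not ignored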

    Read the article

  • Is a code review which uses only code comments a good idea?

    - by gaRex
Preconditions:
- The team uses a DVCS.
- The IDE supports comment parsing (like TODO, etc.).
- Tools like CodeCollaborator are too expensive for the budget.
- Tools like Gerrit are too complex to install, or not usable.

Workflow:
1. The author publishes a feature branch somewhere on the central repo.
2. The reviewer fetches it and starts the review.
3. For each question or issue, the reviewer creates a comment with a special label, like "REV". Such a label MUST NOT appear in production code; it exists only during the review stage:

$somevar = 123; // REV Why echo this here?
echo $somevar;

4. When the reviewer finishes posting comments, he just commits with the stupid message "comments" and pushes back.
5. The author pulls the feature branch back and answers the comments in a similar way, or improves the code, and pushes it back.
6. When all "REV" comments are gone, we can consider the review successfully finished.
7. The author interactively rebases the feature branch and squashes it to remove those "comment" commits (a sketch of this step follows below), and is now ready to merge the feature into develop, or to take whatever action usually follows a successful internal review.

IDE support: I know that custom comment tags are possible in Eclipse and NetBeans. Surely they should also be available in the blablaStorm family.

Questions:
- Do you think this methodology is viable?
- Do you know of something similar?
- What could be improved in it?
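A minimal sketch of step 7, the squash-and-finish (branch names here are hypothetical):

$ git checkout feature/widget
$ git rebase -i develop               # mark each "comments" commit as squash or fixup
$ git push -f origin feature/widget   # rewritten history needs a forced push
$ git checkout develop
$ git merge feature/widget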

    Read the article
