Search Results


  • Crawling engine architecture - Java/Perl integration

    - by Bigtwinz
    Hi all, I am looking to develop a management and administration solution around our web-crawling Perl scripts. Right now the scripts are saved in SVN and are manually kicked off by sysadmins/devs. Every time we need to retrieve data from new sources we have to create a ticket with business instructions and goals. As you can imagine, not an optimal solution. There are three consistent themes with this system:
    - the retrieval of data has a "conceptual structure", for lack of a better phrase, i.e. the retrieval of information follows a particular path
    - we are only looking for very specific information, so we don't have to worry about extensive crawling for a while (think thousands to tens of thousands of pages, not millions)
    - crawls are URL-based instead of site-based
    As I enhance this alpha version to a more production-level beta, I am looking to add automation and management of the retrieval of data. Additionally, our other systems are Java (which I'm more proficient in) and I'd like to compartmentalize the Perl aspects so we don't have to lean heavily on outside help. I've evaluated the usual suspects (Nutch, Droid, etc.) but the time spent modifying those frameworks to suit our specific information retrieval can't be justified. So I'd like your thoughts on the following architecture. I want to create a solution which:
    - uses Java as the interface for managing and executing the Perl scripts
    - uses Java for configuration and data access
    - sticks with Perl for retrieval
    An example use case: a data analyst delivers us a requirement for crawling; a Perl developer creates the required script and uses this webapp to submit it (the script gets saved to the filesystem); the script gets kicked off from the webapp with specific parameters. The webapp should be able to create multiple threads of the Perl script to initiate multiple crawlers (see the sketch below). So the questions are: what do you think? How solid is integration between Java and Perl, specifically calling Perl from Java? Has someone used such a system in which part of the repository is actually Perl? The goal really is to not have a whole bunch of unorganized Perl scripts and to put some management and organization on our information retrieval. Also, I know I can use Perl to do the web part of what we want, but as I mentioned before, I'm trying to keep the Perl focused, even if that seems backwards; I'm not averse to making it an all-Perl solution. Open to any and all suggestions and opinions. Thanks
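
    A minimal sketch of the "call Perl from Java" piece, using only java.lang.ProcessBuilder; the script name and flags below are hypothetical, not from the original post:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class CrawlerLauncher {
            public static void main(String[] args) throws Exception {
                // Launch one Perl crawler with job-specific parameters
                ProcessBuilder pb = new ProcessBuilder(
                        "perl", "crawlers/fetch_listings.pl",   // hypothetical script
                        "--start-url", "http://example.com",
                        "--max-pages", "5000");
                pb.redirectErrorStream(true);                   // merge stderr into stdout
                Process p = pb.start();

                // Stream the crawler's output back to the managing webapp
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        System.out.println(line);
                    }
                }
                System.out.println("Crawler exited with code " + p.waitFor());
            }
        }

    Each launch like this can be submitted to an ExecutorService to get the "multiple crawlers" behavior; tighter couplings (e.g. Inline::Java) exist but keep the two runtimes far more entangled.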


  • Implement Budget Allocation in DAX for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    Comparing sales and budget, or costs and budget, is a very common operation. However, it is often the case that the tables containing the budget and the data to compare it with have different granularities. There are two ways to handle that: you can limit the comparison to the granularity that is common to the two tables, or you can allocate the budget where it's not defined. For example, if you have a budget defined by quarter and category, you might want to allocate it by month and product. In this way, you can do the comparison as if you had a more granular definition of the budget, without actually having to do the manual job of allocating data (usually in an Excel worksheet!). If you want to do budget allocation in DAX, you can use the Budget Patterns we published on DAX Patterns. If you come from an MDX/OLAP background, at first you might find it hard to work without the attribute hierarchies that help you propagate budget values to lower hierarchical levels. However, I think that once you get used to DAX, you will find the behavior very predictable and easy to "debug", even for more complex allocation formulas. You just have to be careful in writing the DAX formula, but the pattern we wrote should help you design the right data model, without creating physical relationships to the budget table! This pattern is also based on the Handling Different Granularities scenario I discussed a couple of weeks ago.
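
    The allocation step itself is just proportional arithmetic, independent of the DAX machinery: each fine-grained cell receives the coarse budget weighted by its share of some driver. A language-agnostic sketch of that idea (plain Java, hypothetical numbers; the actual pattern expresses this through DAX filter context, not loops):

        public class BudgetAllocation {
            /** Split a quarter-level budget across months, proportionally to a driver. */
            static double[] allocate(double quarterBudget, double[] monthlyDriver) {
                double total = 0;
                for (double d : monthlyDriver) {
                    total += d;
                }
                double[] allocated = new double[monthlyDriver.length];
                for (int i = 0; i < monthlyDriver.length; i++) {
                    allocated[i] = quarterBudget * (monthlyDriver[i] / total);
                }
                return allocated;
            }

            public static void main(String[] args) {
                // Q1 budget of 90,000 spread by each month's share of
                // last year's sales (all figures hypothetical)
                double[] lastYearSales = { 100_000, 120_000, 80_000 };
                for (double v : allocate(90_000, lastYearSales)) {
                    System.out.printf("%.2f%n", v);   // 30000.00, 36000.00, 24000.00
                }
            }
        }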


  • Why Won't the WebSocket.onmessage Event Fire?

    - by SumWon
    Hey guys, after toying around with this for hours, I simply cannot find a solution. I'm working on a WebSocket server using "node.js" for a canvas-based online game I'm developing. My game can connect to the server just fine; it accepts the handshake and can even send messages to the server. However, when the server responds to the client, the client doesn't get the message. No errors, nothing, it just sits there peacefully. I've ripped apart my code, trying everything I could think of to fix this, but alas, nothing. Here's a stripped copy of my server code. As I said before, the handshake works fine and the server receives data fine, but sending data back to the client does not work:

        var sys = require('sys'),
            net = require('net');

        var server = net.createServer(function (stream) {
            stream.setEncoding('utf8');
            var shaken = 0;

            stream.addListener('connect', function () {
                sys.puts("New connection from: " + stream.remoteAddress);
            });

            stream.addListener('data', function (data) {
                if (!shaken) {
                    sys.puts("Handshaking...");
                    // Send handshake:
                    stream.write(
                        "HTTP/1.1 101 Web Socket Protocol Handshake\r\n" +
                        "Upgrade: WebSocket\r\n" +
                        "Connection: Upgrade\r\n" +
                        "WebSocket-Origin: http://192.168.1.113\r\n" +
                        "WebSocket-Location: ws://192.168.1.71:7070/\r\n\r\n");
                    shaken = 1;
                    sys.puts("Handshaking complete.");
                } else {
                    // Message received, respond with 'testMessage'
                    var d = "testMessage";
                    var m = '\u0000' + d + '\uffff';
                    sys.puts("Sending '" + m + "' to client");
                    var result = stream.write(m, "utf8");
                    sys.puts(result);
                    /* Result comes back as true, meaning that it pushed the data
                       out. Why isn't the client seeing it?!? */
                }
            });

            stream.addListener('end', function () {
                sys.puts("Connection closed!");
                stream.end();
            });
        });

        server.listen(7070);
        sys.puts("Server Started!");


  • DATE function does not support all the dates in DAX by design #powerpivot #tabular #dax

    - by Marco Russo (SQLBI)
    The DATE function in DAX has this simple syntax: DATE( <year>, <month>, <day> ). If you are like me, you never read the BOL note that says, quite clearly, that it supports dates beginning with March 1, 1900. In fact, I was wrongly assuming that it would support any date that can be represented in a Date data type in Data Models, i.e. all dates beginning with January 1, 1900. The funny thing is that in some of the BOL documentation you will find that the Date data type supports dates after March 1, 1900 (which seems to exclude that date itself, but this is a detail…). But we should not digress. The real issue is that if you call the DATE function passing values between January 1 and February 28, 1900, you will see a different day as a result:

        evaluate row ( "x", DATE( 1900, 1, 1 ) )
        -- returns WRONG result
        -- [x] 12/31/1899 12:00:00 AM

        evaluate row ( "x", DATE( 1900, 2, 29 ) )
        -- returns WRONG result
        -- [x] 2/28/1900 12:00:00 AM

        evaluate row ( "x", DATE( 1900, 3, 1 ) )
        -- returns CORRECT result
        -- [x] 3/1/1900 12:00:00 AM

    As usual, this is not a bug. It is "by design". The DATE function works this way in Excel, and in Excel, too, it was "by design". In this case the design is having the same bug as Lotus 1-2-3, which handled 1900 as a leap year even though it isn't. The first release of Lotus 1-2-3 is dated 1983. I hope many of my readers are younger than that. I tried to open a bug in Connect. Please vote for it. I would like it if Microsoft changed this type of item from "by design" (as we can expect) to "by genetic disease". Or "by historical respect", in order to be more politically correct.
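
    For reference, the Gregorian rule itself is easy to verify with any modern date library. A small sketch using Java's java.time (unrelated to DAX, just illustrating why 1900 has no February 29):

        import java.time.LocalDate;
        import java.time.Year;
        import java.time.temporal.ChronoUnit;

        public class LeapYear1900 {
            public static void main(String[] args) {
                // 1900 is divisible by 4 but also by 100 (and not by 400),
                // so it is NOT a leap year under Gregorian rules.
                System.out.println(Year.isLeap(1900));    // false
                System.out.println(Year.isLeap(2000));    // true (divisible by 400)

                // Consequently, February 1900 has 28 days:
                LocalDate feb = LocalDate.of(1900, 2, 1);
                System.out.println(feb.lengthOfMonth());  // 28

                // LocalDate.of(1900, 2, 29) would throw a DateTimeException,
                // while a Lotus/Excel-style serial-date calendar accepts it.
                long days = ChronoUnit.DAYS.between(
                        LocalDate.of(1900, 1, 1), LocalDate.of(1900, 3, 1));
                System.out.println(days);                 // 59, not 60
            }
        }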


  • MySQL – Introduction to User Defined Variables

    - by Pinal Dave
    MySQL supports user-defined variables to hold data that can be used in a later part of your query. You can save a value to a variable using a SELECT statement and access its value later. Unlike other RDBMSs, you do not need to declare the data type for a variable; the data type is automatically assumed when you assign a value. A value can be assigned to a variable using a SET command as shown below:

        SET @server_type := 'MySQL';

    When the above command is executed, the value MySQL is assigned to the variable called @server_type. Now you can use this variable in a later part of the code. If you want to display the value, you can use a SELECT statement:

        SELECT @server_type;

    The result is MySQL. Once the value is assigned, it remains for the entire session until changed by later statements. So unlike SQL Server, you do not need to include the assignment as part of the execution code every time (because in SQL Server, variables are execution-scoped and dropped after the execution). You can give the column a name as below:

        SELECT @server_type AS server_type;

    You can also use a SELECT statement to both assign and select the value of a variable:

        SELECT @message := 'Welcome to MySQL' AS MESSAGE;

    The result is:

        Message
        --------
        Welcome to MySQL

    You can make use of variables to implement many kinds of logic. One useful method is to generate a row number for each row, as shown in the post MySQL – Generating Row Number for Each Row using Variable. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
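
    Because a user-defined variable lives for the length of the session, it survives across statements issued on the same connection. A minimal JDBC sketch of that behavior (connection URL and credentials are placeholders, not from the original post):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class UserVariableDemo {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test", "user", "password");
                     Statement stmt = conn.createStatement()) {

                    // Assign the variable in one statement...
                    stmt.execute("SET @server_type := 'MySQL'");

                    // ...and read it back in a later statement on the same session
                    try (ResultSet rs = stmt.executeQuery(
                            "SELECT @server_type AS server_type")) {
                        while (rs.next()) {
                            System.out.println(rs.getString("server_type")); // MySQL
                        }
                    }
                }
            }
        }

    Note that a second Connection would see its own, independent @server_type, since the variable is scoped to the session, not the server.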


  • Site too large to officially use Google Analytics?

    - by Jeff Atwood
    We just got this email from the Google Analytics team: We love that you love our product and use it as much as you do. We have observed however, that a website you are tracking with Google Analytics is sending over 1 million hits per day to Google Analytics servers. This is well above the "5 million pageviews per month per account" limit specified in the Google Analytics Terms of Service. Processing this amount of data multiple times a day takes up valuable resources that enable us to continue to develop the product for all Google Analytics users. As such, starting August 23rd, 2010, the metrics in your reports will be updated once a day, as opposed to multiple times during the course of the day. You will continue to receive all the reports and features in Google Analytics as usual. The only change will be that data for a given day will appear the following day. We trust you understand the reasons for this change. I totally respect this decision, and I think it's very generous to not kick us out. But how do we do this the right way -- what's the official, blessed Google way to use Google Analytics if you're a "whale" website with lots of hits per day? Or are there other analytics services that would be more appropriate for very large websites?


  • Use Microsoft PowerPivot to Access Salesforce.com Through the OData Connector

    - by dataintegration
    This article will explain how to connect to any of our OData Connectors with Microsoft Excel's PowerPivot business intelligence tool. While the example will use the Salesforce Connector, the same process can be followed for any of the RSSBus OData Connectors.
    Step 1: Download and install both the Salesforce Connector from RSSBus and PowerPivot for Excel from Microsoft.
    Step 2: Next you will want to configure the Salesforce Connector to connect with your Salesforce account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide which will walk you through setting up the Salesforce Connector.
    Step 3: Once you have successfully configured the Salesforce Connector application, you will want to open Excel and select the PowerPivot tab at the top of the window.
    Step 4: Here you will click on the button labeled PowerPivot Window at the top left.
    Step 5: A new pop up will appear. Now select the option "From Data Feeds".
    Step 6: In the resulting Table Import Wizard you will enter the OData URL of the Salesforce Connector. You can find this by clicking on the Settings tab of the Salesforce Connector. It will look something like this: http://localhost:8181/sfconnector/data/conn/odata.rsc. You will also need to add authentication options in this step. To do this, click on the Advanced button and scroll down to the Security section of the resulting pop up window. Change the Integrated Security option to "Basic". You will also need to enter the User ID and Password of the user who has access to the Salesforce Connector.
    Step 7: When the connection to the Salesforce Connector is successful, click the Next button at the bottom of the window.
    Step 8: A table listing of the available tables will appear in the next window of the wizard. Here you will select which tables you want to import and click Finish.
    Step 9: If the import was successful, click Close and you are done! Your data is now in PowerPivot.
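
    Since the connector exposes a plain OData (Atom) feed over HTTP with Basic authentication, the URL from Step 6 can be sanity-checked with any HTTP client before involving PowerPivot. A hedged Java sketch (URL and credentials are placeholders matching the article's example, not a documented API of the connector):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class ODataSmokeTest {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://localhost:8181/sfconnector/data/conn/odata.rsc");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();

                // Same Basic credentials PowerPivot asks for in the Table Import Wizard
                String auth = Base64.getEncoder().encodeToString(
                        "user:password".getBytes(StandardCharsets.UTF_8));
                conn.setRequestProperty("Authorization", "Basic " + auth);

                // A 200 response with an Atom service document means the feed is reachable
                System.out.println("HTTP " + conn.getResponseCode());
                try (BufferedReader r = new BufferedReader(new InputStreamReader(
                        conn.getInputStream(), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }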


  • Iterative Conversion

    - by stuart ramage
    Question Received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but haven't found any information on how it does. Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already-converted Transactional Data would need to be cleared down) and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation; you need to be migrating directly into your intended environment). In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.) and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions) and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they are only doing work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need and do not do a complete rerun for your subsequent conversion.


  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011, starting almost from scratch (aside from migrating the database) on brand new virtual servers. I just want to ask for some advice on my current server setup... a sanity check. All servers are running Windows Server 2008. The pages on our website are all classic ASP.
    Database: SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test anyway).
    Content Manager: A single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time, so it is not under a particularly heavy load.
    Content Data Store: Filesystem located somewhere on the network. One directory for live and one for staging.
    Content Delivery: Two servers (cd1 and cd2), each with the following server roles installed: cd1 writes to a filesystem content data store for the live website, cd2 writes to the content data store for the staging website.
    Presentation: Two public-facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store as it's a filesystem. Each of the web servers has the Content Delivery Server installed so that I can use dynamic linking (and other features?).
    I've so far set up everything but the web servers. Any thoughts?
    Edit: Thanks to Ram S who linked me to a decent walkthrough, upvoted. I suppose I should have posed some questions, as I didn't really ask one. I'm a little confused over the content delivery aspect. I have the Content Delivery split into two separate parts: cd1 and cd2 do the work of shifting information from the Content Manager to the staging/live web directories, while web1 and web2 should do the work of serving the web pages to the outside world and will interact with the content data store (file system). Is this a correct setup? I need some parts of the Content Delivery on my web servers, right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment, right? But I suspect this would put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!


  • Craftsmanship is ALL that Matters

    - by Wayne Molina
    Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours.  Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums.  Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same:
    A Programmer's Tale
    A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in).  This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do".  He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques.  They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the "elite" in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst case and truly nightmarish scenario.  Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers. Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) These "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers. I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of "developer" that has helped to give our industry a bad name.  Here is why:
    One Who Knows Nothing, Understands Nothing
    On one hand, you have the cognoscenti of the .NET development world.  Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers that are pretty much our version of celebrities.  These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with.  They tout the virtues of the SOLID principles, or of using TDD/BDD, or using a mature ORM like NHibernate, Subsonic or even Entity Framework. On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss.  Joe's been with Initrode for 10 years, starting as the company's very first programmer and over the years building up a little fiefdom of his own until at the present he's in charge of all Initrode's software development.  Joe writes code the same way he always has, without bothering to learn much, if anything.
He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrew solution has never occurred to him.  He doesn't understand TDD and considers “testing” to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works.  He doesn't really understand SOLID, and he doesn't care to.  He's worked as a programmer for years, and that's all that counts.  Right?  WRONG. Who would you rather trust?  Someone with years of experience and who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability?  Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way" but someone like Jeremy Miller or Ayende Rahien have years of experience at companies just like Initrode, THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way". Here's another way of thinking about it: If you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA or Barack Obama?  One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice. NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field.  Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice. DIY Considered Harmful I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside it's own little box - no useful skill outside of the small pond.  If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy.  However if, like most of us, you want to advance your career outside a very narrow space you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else because you are aware of alternative and, in almost most cases, better tools for the job.  A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/well-known ORM versus some in-house one knows better than a senior developer with 20 years experience who doesn't understand any of that, plain and simple.  Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve. 
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a “junior” could know more than them; after all, they’ve spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software.  And here comes a newbie who hasn’t spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as “proof” that he’s good, or that he might have to learn something new to improve; in most cases the problem is Joe Everyman, and by extension Initrode itself, has a mentality of just being “good enough”, and mediocrity is the rule of the day. A Thorn Bush is No Place for a Phoenix My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say is the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company.  Find a company where they DO care about quality, and craftsmanship, otherwise you will never be happy.  There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care to craftsmanship.  In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman because he won't listen; he might even get upset that someone is trying to "upstage" him and fire the newbie, and replace someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question. Find a company that has people smart enough to listen to the "best and brightest", and be happy.  Do not, I repeat, DO NOT waste away in a job working for ignorant people.  At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional.  When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity.  Work for a company that uses REAL software engineering techniques and really cares about craftsmanship.  The biggest issue affecting our career, and the reason software development has never been the respected, white-collar career it was meant to be, is because hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are.  These modern day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them. To sum this up another way: If you surround yourself with learned people, you will learn.  Surround yourself with ignorant people who can't, as the saying goes, see the forest through the trees, and you'll learn nothing of any real value.  
There is more to software development than just writing code, and the end goal should not be just "shipping software", it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one.  An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line.  This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and less Joe Everymans who are unwilling to adapt or foster new ways of thinking. Conclusion - I Cast “Protection from Fire” I am fairly certain this post will spark some controversy and might even invite the flames.  Please keep in mind these are opinions and nothing more.  A little healthy rant and subsequent flamewar can be good for the soul once in a while.  To paraphrase The Godfather: It helps to get rid of the bad blood.


  • Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model #ssas #tabular #bism

    - by Marco Russo (SQLBI)
    Alberto, Chris and I spent many months (many nights, holidays and also working days of the last few months) writing the book we would have liked to read when we started working with Analysis Services Tabular. A book that explains how to use Tabular, how to model data with Tabular, how Tabular works internally and how to optimize a Tabular model; all those things you need to start on a real project in order to make a happy customer. You know, we're all consultants after all, so customer satisfaction is really important if we want to be paid for our job! Now the book writing is finished, we're in the final stage of editing and reviews, and we look forward to getting our print copy. Its title is very long: Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model. But the important thing is that you can already (pre)order it. This is the list of chapters:
    01. BISM Architecture
    02. Guided Tour on Tabular
    03. Loading Data Inside Tabular
    04. DAX Basics
    05. Understanding Evaluation Contexts
    06. Querying Tabular
    07. DAX Advanced
    08. Understanding Time Intelligence in DAX
    09. Vertipaq Engine
    10. Using Tabular Hierarchies
    11. Data Modeling in Tabular
    12. Using Advanced Tabular Relationships
    13. Tabular Presentation Layer
    14. Tabular and PowerPivot for Excel
    15. Tabular Security
    16. Interfacing with Tabular
    17. Tabular Deployment
    18. Optimization and Monitoring
    And this is the book cover – have a good read!


  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clarke, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is "pushing boundaries", and first up is SQL Server 2014's In-Memory OLTP technology (former codename "Hekaton"). It is a feature that provokes a lot of interest, and for good reason: without any need for application rewrites or hardware updates, it can ensure that an application finds in memory most or all of the data it needs, and can lead to huge improvements in processing times. A good recent Hekaton use cases article talks about applications that need a "shock absorber" when spikes, or just a high rate of incoming workload (including data in ETL scenarios), become a primary bottleneck. To get a really deep look at this technology, I would check out David DeWitt's summit keynote tomorrow (it will be live streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start to define memory-optimized tables and natively compiled stored procedures. These memory-optimized tables use completely optimistic multi-version concurrency control – no waiting on locks! After that, Tom LaRock has compiled a useful set of links to drill deeper, and includes one to Microsoft's AMR tool to help you gauge the tables that might benefit most. Tony.


  • Tunlr Gives Non-US Residents Access to Hulu, Netflix, and More

    - by Jason Fitzpatrick
    If you’re outside the US market and looking to enjoy US streaming services like Hulu, Netflix, and more, Tunlr is a free and simple service that will get you connected. Unlike other tools that are more expensive (both in price and in hardware/bandwidth overhead), like VPN services, Tunlr doesn’t set up a full tunnel but instead serves as an alternative DNS server that allows you to access previously blocked content. From the Tunlr FAQ: Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We’re using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-address certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that’s not directly related to the video or music content providers which Tunlr supports is not only left untouched, it’s also not even routed through Tunlr. Hit up the link below for more information about the service, including how to set it up on various operating systems, portable devices, and gaming consoles. Tunlr [via gHacks]


  • Does HTML 5 Make “Rich vs. Reach” a False Choice?

    - by andrewbrust
    The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for developing line-of-business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post. Before bearing down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to get you bought into the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked to below, and in that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note: although the instructions for each game tell you to press the A key to start, press the Z key instead.) Here's the link: http://www.kesiev.com/akihabara As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, I can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development. So what’s new in HTML 5, specifically, that makes sites like this possible?  The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order. Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5:
    - 2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script. As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle. They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate.
    - Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternately, player controls can be hidden and the content can play automatically.
    - Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, and limiting valid input on a field to dates, email addresses or a list of values. There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly.
    - Offline caching, local storage and a client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available.
    - User-defined data. Data (or metadata - data about data) can easily be embedded statically and/or retrieved and updated with JavaScript code. This avoids having to embed that data in a separate file, or within script code.
    Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications - apps that function more like desktop software than Web sites) and interactive Web content, including games. What do players in the media world think about this?  From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think.  Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with them, at least not yet.  YouTube, on the other hand, already has an experimental HTML 5-based version of their site.  TechCrunch has reported that NetFlix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices.  And the New York Times’ Web site now embeds some video clips without resorting to Flash.  They have to - otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser. What do media-focused developers think about all this?  I talked to several to get their opinions. Michael Pinto is CEO and Founder of Very Memorable Design, whose primary focus has been to help marketing directors get traction online.  The firm’s client roster includes the likes of Time, Inc., Scholastic and PBS.  Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.” A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector.  When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded “Not at all!  In particular, not for media and advertising customers!  These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.” Ironically, Pinto’s firm is a heavy user of Flash for its projects and Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack.  For whatever reason, each firm seems to see the other’s toolset as a more viable choice.
    But both agree that the developer tool story around HTML 5 is deficient.  Pinto explains “What’s lost with [HTML 5 and Javascript] techniques is that there isn’t a single widely favored easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.“  Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash.  If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript.  It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.” Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees.  And for better or worse, they’ve decided to address this shortcoming of HTML 5, even at the risk of diminishing their Flash platform. Yesterday Adobe announced that their hugely popular Dreamweaver Web design authoring tool would directly support HTML 5 and CSS 3 development.  In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs. Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media, a digital and offline branding firm serving the media and marketing sectors, among others.  Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.” Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate.  In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.” I’ve already mentioned that Google’s ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time.  And Google made many announcements of their own, including the open sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to the way users procure native apps on various mobile platforms. HTML 5 looks to be disruptive, especially to the media world.  And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world.  If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers.  Can this development in the digital arena save the titans of the print world?  I can’t predict, but it’s going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.


  • Achieving forward compatibility with C++11

    - by mcmcc
    I work on a large software application that must run on several platforms. Some of these platforms support some features of C++11 (e.g. MSVS 2010) and some don't support any (e.g. GCC 4.3.x). I see this situation continuing for several years (my best guess: 3-5 years). Given that, I would like to set up a compatibility interface such that (to whatever degree possible) people can write C++11 code that will still compile with older compilers with a minimum of maintenance. Overall, the goal is to minimize #ifdefs as much as reasonably possible while still enabling basic C++11 syntax/features on the platforms that support them, and to provide emulation on the platforms that don't. Let's start with std::move(). The most obvious way to achieve compatibility would be to put something like this in a common header file:

        #if !defined(HAS_STD_MOVE)
        namespace std { // C++11 emulation
            template <typename T>
            inline T& move(T& v) { return v; }

            template <typename T>
            inline const T& move(const T& v) { return v; }
        }
        #endif // !defined(HAS_STD_MOVE)

    This allows people to write things like

        std::vector<Thing> x = std::move(y);

    ... with impunity. It does what they want in C++11 and it does the best it can in C++03. When we finally drop the last of the C++03 compilers, this code can remain as is. However, according to the standard, it is illegal to inject new symbols into the std namespace. That's the theory. My question is, practically speaking, is there any harm in doing this as a way of achieving forward compatibility?


  • Anonymous Access and Sharepoint Web Services

    - by Stacy Vicknair
    A month or so ago I was working on a feature for a project that required a level of anonymity on the SharePoint site in order to function. At the same time I was also working on another feature that required access to the SharePoint search.asmx web service. I found out, the hard way, that the SharePoint web services do not operate in an expected way while the IIS site is under anonymous access. Even though these web services expect requests with certain permissions (in theory), they never attempt to request those credentials when the web service is contacted. As a result the services return a 401 Unauthorized response. The fix for my situation was to restrict anonymous access to the area that needed it (in this case the control in question had support for being used in an ASP.NET app that I could throw in a virtual directory). After that I removed anonymous access from IIS for the site itself, and the QueryService requests were working once more. Here's a related article with a bit more depth about a similar experience: http://chrisdomino.com/Blog/Post/401-Reasons-Why-SharePoint-Web-Services-Don-t-Work-Anonymously?Length=4 Technorati Tags: Sharepoint, QueryService, WSS, IIS, Anonymous Access


  • Using jQuery to parse XML returned from PHP script (imgur.com API)

    - by vette982
    Here's my jQuery:

        var docname = $('#doc').val();

        function parseXml(xml) {
            $(xml).find("rsp").each(function() {
                alert("success");
            });
        }

        $('#submit').click(function() {
            $.ajax({
                type: "GET",
                url: "img_upload.php",
                data: "doc=" + docname,
                dataType: "xml",
                success: parseXml
            });
            return false;
        });

    Note that #doc is the id of a form text input box and #submit is the submit button's id. If successful, I'd like a simple "success" javascript popup to appear. Here's img_upload.php with my API key omitted:

        <?php
        $filename = $_GET["doc"];
        $handle = fopen($filename, "r");
        $data = fread($handle, filesize($filename)); // $data is file data

        $pvars = array('image' => base64_encode($data), 'key' => <MY API KEY>);
        $timeout = 30;

        $curl = curl_init();
        curl_setopt($curl, CURLOPT_URL, 'http://imgur.com/api/upload.xml');
        curl_setopt($curl, CURLOPT_TIMEOUT, $timeout);
        curl_setopt($curl, CURLOPT_POST, 1);
        curl_setopt($curl, CURLOPT_POSTFIELDS, $pvars);
        $xml = curl_exec($curl);
        curl_close($curl);
        ?>

    When directly accessed with a GET argument for "doc", img_upload.php returns the following XML format:

        <?xml version="1.0" encoding="utf-8"?>
        <rsp stat="ok">
          <image_hash>cxmHM</image_hash>
          <delete_hash>NNy6VNpiAA</delete_hash>
          <original_image>http://imgur.com/cxmHM.png</original_image>
          <large_thumbnail>http://imgur.com/cxmHMl.png</large_thumbnail>
          <small_thumbnail>http://imgur.com/cxmHMs.png</small_thumbnail>
          <imgur_page>http://imgur.com/cxmHM</imgur_page>
          <delete_page>http://imgur.com/delete/NNy6VNpiAA</delete_page>
        </rsp>

    What's the problem here? Here's the Imgur API page for reference.


  • For a .NET winforms datagridview I would like a combobox column to have a different set of values for each row.

    - by Seth Spearman
    Hello, I have a DataGridView that I am binding to a POCO. I have the databinding working fine. However, I have added a combobox column that I want to be different for each row. Specifically, I have a grid of purchased items, some of which have sizes (like Adult XL, Adult L) and other items are not sized (like Car Magnet). So essentially what I want to change is the DATA SOURCE for a combobox column in the data grid. Can that be done? What event can I hook into that would allow me to change properties of certain columns FOR EACH ROW? An acceptable alternative is to change a property when the user clicks or tabs into the row. What event is that? Seth
    EDIT: I need more help with this question. With Tridus's help I am SO close, but I need a bit more information. First, per the question, is the CellFormatting event really the best/only event for changing the DataSource of a combobox column? I ask because I am doing something rather resource/data intensive, not merely formatting the cell. Second, the CellFormatting event is being called just by having the mouse hover over the cell. I tried to set the FormattingApplied property inside my if block and then check for it in the if test, but that is returning a weird error message. My ideal situation is that I would change the data source of the combobox column once for each row and then be done with it. Finally, in order to set the data source of the combobox column I have to be able to cast the cell inside my if block to a type of DataGridViewComboBoxColumn so that I can fill it with rows or set the datasource or something. Here is the code I have right now:

        Private Sub ProductsDataGrid_CellFormatting(ByVal sender As System.Object, ByVal e As System.Windows.Forms.DataGridViewCellFormattingEventArgs) Handles ProductsDataGrid.CellFormatting
            If e.ColumnIndex = ProductsDataGrid.Columns("SizeDGColumn").Index Then ' AndAlso Not e.FormattingApplied Then
                Dim product As LeagueOrderProductInfo = DirectCast(ProductsDataGrid.Rows(e.RowIndex).DataBoundItem, LeagueOrderProductInfo)
                Dim sizes As LeagueOrderProductSizeList = product.ProductSizes
                sizes.RemoveSizeFromList(_parentOrderDetail.SizeID)
                'WHAT DO I DO HERE TO FILL THE COMBOBOX COLUMN WITH THE sizes collection.
            End If
        End Sub

    Please help. I am completely stuck, and this task item should have taken an hour and I am 4+ hours in now. BTW, I am also open to resolving this by taking a completely different direction with it (as long as I can be done quickly). Seth


  • Blackberry - Listfield layout question

    - by Kai
    I'm having an interesting anomaly when displaying a ListField on the BlackBerry simulator: the top item is the height of a single line of text (about 12 pixels), while the rest are fine. Does anyone know why only the top item is being drawn this way? Also, when I add an empty venue in position 0, it still displays the first actual venue this way (the item in position 1). Not sure what to do. Thanks for any help. The layout looks like this:

        -----------------------------------
        | *part of image* | title         |
        -----------------------------------
        |                 | title         |
        | * full image *  | address       |
        |                 | city, zip     |
        -----------------------------------

    The object is called like so:

        listField = new ListField(venueList.size());
        listField.setCallback(this);
        listField.setSelectedIndex(-1);
        _middle.add(listField);

    Here is the drawListRow code:

        public void drawListRow(ListField listField, Graphics graphics, int index, int y, int width) {
            listField.setRowHeight(90);
            Hashtable item = (Hashtable) venueList.elementAt(index);
            String venue_name = (String) item.get("name");
            String image_url = (String) item.get("image_url");
            String address = (String) item.get("address");
            String city = (String) item.get("city");
            String zip = (String) item.get("zip");
            EncodedImage img = null;
            try {
                String filename = image_url.substring(image_url.indexOf("crop/") + 5, image_url.length());
                FileConnection fconn = (FileConnection) Connector.open("file:///SDCard/Blackberry/project1/" + filename, Connector.READ);
                if (!fconn.exists()) {
                } else {
                    InputStream input = fconn.openInputStream();
                    byte[] data = new byte[(int) fconn.fileSize()];
                    input.read(data);
                    input.close();
                    if (data.length > 0) {
                        EncodedImage rawimg = EncodedImage.createEncodedImage(data, 0, data.length);
                        int dw = Fixed32.toFP(Display.getWidth());
                        int iw = Fixed32.toFP(rawimg.getWidth());
                        int sf = Fixed32.div(iw, dw);
                        img = rawimg.scaleImage32(sf * 4, sf * 4);
                    } else {
                    }
                }
            } catch (IOException ef) {
            }
            graphics.drawText(venue_name, 140, y, 0, width);
            graphics.drawText(address, 140, y + 15, 0, width);
            graphics.drawText(city + ", " + zip, 140, y + 30, 0, width);
            if (img != null) {
                graphics.drawImage(0, y, img.getWidth(), img.getHeight(), img, 0, 0, 0);
            }
        }
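
    A plausible explanation, offered as a guess rather than a confirmed diagnosis: setRowHeight(90) is only called inside drawListRow(), so the list has already laid out the first row at the default single-line height before the callback ever runs. Setting the height once, when the field is constructed, sidesteps that ordering problem:

        // Configure the row height up front, before layout, instead of
        // calling setRowHeight() repeatedly inside drawListRow()
        listField = new ListField(venueList.size());
        listField.setRowHeight(90);   // tall enough for the full image
        listField.setCallback(this);
        listField.setSelectedIndex(-1);
        _middle.add(listField);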


  • Encrypted AES key too large to Decrypt with RSA (Java)

    - by Petey B
    Hello, I am trying to make a program that encrypts data using AES, then encrypts the AES key with RSA, and then decrypts. However, once I encrypt the AES key it comes out to 128 bytes. RSA will only allow me to decrypt 117 bytes or less, so when I go to decrypt the AES key it throws an error. Relevant code:

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);
        KeyPair kpa = kpg.genKeyPair();
        pubKey = kpa.getPublic();
        privKey = kpa.getPrivate();
        updateText("Private Key: " + privKey + "\n\nPublic Key: " + pubKey);

        updateText("Encrypting " + infile);

        // Generate AES key
        KeyGenerator kgen = KeyGenerator.getInstance("AES");
        kgen.init(128); // 192/256
        SecretKey aeskey = kgen.generateKey();
        byte[] raw = aeskey.getEncoded();
        SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES");

        updateText("Encrypting data with AES");
        // Encrypt data with AES key
        Cipher aesCipher = Cipher.getInstance("AES");
        aesCipher.init(Cipher.ENCRYPT_MODE, skeySpec);
        SealedObject aesEncryptedData = new SealedObject(infile, aesCipher);

        updateText("Encrypting AES key with RSA");
        // Encrypt AES key with RSA
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, pubKey);
        byte[] encryptedAesKey = cipher.doFinal(raw);

        updateText("Decrypting AES key with RSA. Encrypted AES key length: " + encryptedAesKey.length);
        // Decrypt AES key with RSA
        Cipher decipher = Cipher.getInstance("RSA");
        decipher.init(Cipher.DECRYPT_MODE, privKey);
        byte[] decryptedRaw = cipher.doFinal(encryptedAesKey); // error thrown here because encryptedAesKey is 128 bytes
        SecretKeySpec decryptedSecKey = new SecretKeySpec(decryptedRaw, "AES");

        updateText("Decrypting data with AES");
        // Decrypt data with AES key
        Cipher decipherAES = Cipher.getInstance("AES");
        decipherAES.init(Cipher.DECRYPT_MODE, decryptedSecKey);
        String decryptedText = (String) aesEncryptedData.getObject(decipherAES);
        updateText("Decrypted Text: " + decryptedText);

    Any idea on how to get around this?
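
    The 117-byte limit applies to encrypting with a 1024-bit RSA key under PKCS#1 padding, but decryption accepts the full 128-byte ciphertext. Reading the snippet, the likely culprit is that the decryption line calls doFinal() on cipher (still in ENCRYPT_MODE) instead of on the freshly initialized decipher, so the code is actually trying to RSA-encrypt the 128-byte encrypted key. A one-line correction, offered as a suggestion rather than a verified fix:

        // Use the Cipher that was initialized with DECRYPT_MODE; a 1024-bit
        // RSA key decrypts a 128-byte ciphertext block without issue
        byte[] decryptedRaw = decipher.doFinal(encryptedAesKey);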


  • LINQ and conversion operators

    - by vik20000in
    LINQ has a habit of returning things as IEnumerable. But we have all been working with so many other list formats - arrays, IList, Dictionary, etc. - that most of the time, after getting the result set, we want it converted to one of our known formats. For this reason LINQ provides helper methods which can convert the result set into the desired format. Below is an example:

        var sortedDoubles =
            from d in doubles
            orderby d descending
            select d;

        var doublesArray = sortedDoubles.ToArray();

    In the same way we can also transfer the data to IList and Dictionary objects. Let's say we have an array of objects. The array contains different types of data - double, int, null, string, etc. - and we want only one type of data back; in that case we can use the helper function OfType. Below is an example:

        object[] numbers = { null, 1.0, "two", 3, "four", 5, "six", 7.0 };
        var doubles = numbers.OfType<double>();

    Vikram

    Read the article

  • CodePlex Daily Summary for Wednesday, June 16, 2010

    New Projects

    AtomFeedBuilder: Simple and lightweight Atom feed builder. Developed in VB.Net.
    Cable and Wire harness tester: If you build lots of cable/wire harnesses you know that testing them is a pain. I have wanted an automated cable tester for a while now but commerci...
    Carmenta Engine Power Pack: The target of Carmenta Engine Power Pack is to provide extensions, utilities and wrapper classes that allow developers to work more efficiently w...
    Customer Book: Customer Book, it's like an address book with a facility for generating quotations for a business or a supplier to the clients.
    Dialector: Using this program, you can convert pure Turkish texts into different dialects, such as: Emmi, Kufurbaz, Kusdili, Laz, Peltek, Tiki, and many more....
    Downline Commision Generator: Analyze the compensation plans of organizations in multi-level marketing or network marketing. Check with this tool the commission plan of the c...
    EmbeddedSpark 2010 Project M: Project M is a system for seamlessly interfacing a tabletop interface to portable devices placed upon it. Using image recognition and projectors, P...
    Event Log Creator by eVestment Alliance: Provides a simple utility to create a new source and log in the Windows event log. The utility checks if the current user is an administrator, and...
    ExchangeHog: Desktop/daemon application that aggregates emails from multiple POP3 accounts into a single Microsoft Exchange 2010 account. For users receiving ema...
    Extra Time Calculator: Extra Time Calculator allows exam end times to be easily calculated for students receiving an extra time accommodation.
    Generic WCF Hosting Service: The Generic Host Service provides a simple, reusable, and reliable mechanism for hosting WCF services.
    Google Storage for .NET: Google Storage for .NET (GSN) is an open source library that provides .NET developers with easy access to the Google Storage API. The library allo...
    Helium: The Helium XNA game engine is a light portable game engine designed to work on many platforms and soon to be expanded to more. Currently the helium...
    IconizedButton Control Set: ASP.NET WebForms IconizedButton Custom Control Set. Replaces the dull Button/LinkButton/HyperLink controls with styling and left- and right-aligned...
    Jedi Council PM List: Allows users to process Private Message Lists on the Jedi Council forums for TheForce.Net.
    JetPumpDesign: Software for the design and calculation of steam jet ejector pumps. Author: Shen Yang, Class 61, Process Equipment and Control Engineering, Xi'an Jiaotong University.
    log4Nez: A highly personalizable implementation of a logging library.
    MutantFramework: Provides a common set of building blocks for building enterprise applications.
    NUnit Add-in for Growl Notifications: NUnit add-in which allows sending notifications to Growl when a test run is started or finished, when the first test failure occurs, and so on.
    Object Reports: Object Reports is a "proof of concept" application which provides users the ability to visually build queries based on data stored in the relational...
    openTrionyx: openTrionyx is a set of tools to make web application development easier. Includes Data, Web and plain-text document tools. Developed in C#, compl...
    Partial Rendering control for MVC 2: This project shows a web custom control that allows partial rendering using async post-back (through jQuery) in an MVC 2 web application.
    PowerGUI Visual Studio Extension: The PowerGUI Visual Studio Extension exposes PowerGUI as an editor in Visual Studio. PowerShell developers can now write scripts directly in Visual...
    PowerShell Script Provider: Write your own PowerShell provider using only script, no C# required. Module definition is provided by a Windows PowerShell 2.0 Module, which may b...
    Scholar: Scholar is a solution/framework for .Net developers to help with the creation of distributed data processing (think SETI@home style apps). It is in...
    scrabb: Scrabb helps people play scrabble over the net.
    SharePointNuke: A DotNetNuke module that connects to a SharePoint server using the web services API and displays the content of a specified list.
    SolidWorksBackConverter: A project to convert a SolidWorks file to an older version.
    Soma - Sql Oriented MApping framework: Sql Oriented MApping framework.
    SPCreate: SPCreate is an automatic stored procedure creator. It's developed in C#. SPCreate outputs an ADO.NET class (C# or VB.Net) and SQL Server or MS Access stored pro...
    std::streambuf wrapper for COM IStream: This provides a subclass of std::streambuf that wraps a COM IStream, so you can use an IStream with any C++ code that uses iostreams or the STL alg...
    VACID solutions: Solutions of verification problems posed in the paper "Verification of Ample Correctness of Invariants of Data-structures". Developed with various tool...
    Viewer: Our goal is to create a C# project that will centralize image and movie viewing in a forms application. It will also have a specialized web browser...
    vsXPathTester: vsXPathTester is a utility for developers. It helps them load an XML file and then run their XPath query. The result is shown in a window. It saves the...

    New Releases

    .Net Max Framework: Version 1.0.0: Version 1.0.0 - Stable
    Andrew's XNA Helpers: V1.2: Features upgraded based off of the V1.1 code for both X86 and XBOX. Additions/Changes: Reworked the Texture2D and Rectangle extender namesp...
    BaseCalendar: BaseControls 1.2: BaseControls 1.2 contains the BaseCalendar ASP.NET control. Changes: 1.2 Exposed EffectiveVisibleDate and FirstVisibleDay methods; 1.1 Rendering ...
    Customer Book: Customer Book Code: Bronze Release. PostgreSQL database dump for Customer Book. Open PgAdmin III and restore the database dump into your server. Notice User Name for t...
    Data Connection Suite: Data Connections Suite v1.0.0.0: This is the first release of this incomplete component, but good enough to use in a production environment (it's what we do).
    DigitArchive: Build 8: Now the software works on .NET 3.5 and above, so if you have Windows 7 it installs without any prerequisites. Changes: works on .NET 3.5; now t...
    Doom 64 Ex (SVN Builds): Doom 64 Ex r-738: Finally a new build after so many months. There are way too many updates to even begin to write about here, just download and frag away. There is a s...
    DotNetNuke® Media: 03.03.00a: This release is Beta!! There is no guaranteed upgrade path to the 03.03.00 release version! Please use this to help us and test what we have. Repor...
    Downline Commision Generator: Downline Commision Generator: Downline Commision Generator
    Elmah2 : An extensable error logger for ASP.net: 1.0 Beta 1: This is a beta release; be sure to report any errors etc. Be sure to check out the documentation tab for information on how to install and configure...
    EPiServer Template Foundation: First compiled release: First compiled release, for experimenting only! :) An introductory post will be published shortly on the blog.
    Helium: Initial Release: This is the initial release of the Helium Engine. Please check out the documentation link for information on how to use the engine. To see a ful...
    IconizedButton Control Set: IconizedButton Control Set: Taking a line from Google's playbook - marking everything as Beta. Seriously, I'd like to hear some feedback before moving the Development Status...
    JetPumpDesign: JetPumpDesign 1.0: The current software can design steam jet pumps with up to five stages.
    Microsoft Silverlight Analytics Framework: Version 1.4.4 Installer: Tools targeting Visual Studio 2010 and Expression Blend 4 (part of Expression Studio 4). Analytics services included: Vendor Behavior, Silverlight 3...
    NHibernate Sidekick Library: 0.7.0: Added a few methods for use with the NHibernate 2nd level cache (EvictAllObjectsFromCache and EvictPersistentClass). I also added the boolean optio...
    NHibernate Sidekick Library: 0.7.5: Fix for http://nhprof.com/Learn/Alerts/DoNotUseImplicitTransactions
    Nito.KitchenSink: Version 9: Dependencies: Nito.Linq 0.6 Beta (released 2010-06-14), Rx 1.0.2563.0 (released 2010-06-09). Supported platforms: .NET 4.0 Client Profile, with Rx. ...
    NQueue: Version 1.0.0.0: Version 1.0.0.0
    NUnit Add-in for Growl Notifications: NUnit Add-in for Growl Notifications 1.0 build 0: The very first stable release
    Partial Rendering control for MVC 2: Partial Rendering control for MVC 2: Here there are the source code and an MVC 2 web site as a test
    PowerShell Script Provider: PSProvider 0.1: Requires PowerShell 2.0 RTM. The functions in the attached ps1 script are the bare minimum for a working container-style provider (no subfolders). ...
    Quick Performance Monitor: Version 1.4.3: Fixed an issue where, if an instance name contained backslash characters (\), the program would not load the performance counter properly. Also added sta...
    SharePointNuke: SharePointNuke 2.00.08: SharePointNuke 2.00.08 - binary DotNetNuke 5.x module.
    Skype Voice Changer: 1.0 Updated Sample Code: This updated release is the accompanying code for the Skype Voice Changer article on Coding4Fun. Changes in this release: added support for PreEmp...
    std::streambuf wrapper for COM IStream: Beta release (tested in a commercial project): This code has been tested in a custom Windows Search filter and property handler I wrote for a proprietary binary format. There may be some bugs, b...
    Sunlit World Scheme: Sunlit World Scheme - 20100615 - source and binary: This is the result of building the current source code in Debug mode. The source code is included. The binaries are in the SchemeCode folder along...
    Timo-Design / 40FINGERS DotNetNuke® Skinning Extensions: Style Helper Skin Object Beta: The 40FINGERS Style Helper Skin Object allows you to add CSS and JavaScript links and meta tags to the head of your page. It can also remove CSS l...
    Umbraco CMS: Umbraco 4.1 RC: This is the final test version of Umbraco 4.1 before the final release. PLEASE BE AWARE THAT UMBRACO 4.1 RC IS A .NET 4.0 RELEASE AND WON'T WORK O...
    VCC: Latest build, v2.1.30615.0: Automatic drop of latest build
    WCF 4 Templates for Visual Studio 2010: UserNameForCertificate Template: Produces a WCF service application supporting username and password authentication, relying on message security to protect messages en route. Suppl...
    WCF 4 Templates for Visual Studio 2010: UserNameOverHttps Template: Produces a WCF service application supporting username and password authentication over HTTPS/SSL, relying on transport security to protect message...
    xUnit.net Contrib: xunitcontrib 0.4.1 alpha (ReSharper 5.1.1709 only): xunitcontrib release 0.4.1 (ReSharper runner). This release targets the current nightly build of ReSharper 5.1's Early Access Programme (build 1709)...

    Most Popular Projects

    Community Forums NNTP bridge
    RIA Services Essentials
    NeatUpload
    Bxf (Basic XAML Framework)
    .NET Transactional File Manager
    SOLID by example
    SSIS Expression Editor & Tester
    WEI Share
    Chirpy - VS Add In For Handling Js, Css, and DotLess Files
    ASP.NET MVC Time Planner

    Most Active Projects

    dotSpatial
    Rhyduino - Arduino and Managed Code
    Cassandraemon
    patterns & practices – Enterprise Library
    Community Forums NNTP bridge
    Lightweight Fluent Workflow
    patterns & practices: Enterprise Library Contrib
    NB_Store - Free DotNetNuke Ecommerce Catalog Module
    BlogEngine.NET
    jQuery Library for SharePoint Web Services

    Read the article

  • BPM 11g and Human Workflow Shadow Rows by Adam Desjardin

    - by JuergenKress
    During the OFM Forum last week, there were a few discussions around the relationship between the Human Workflow (WF_TASK*) tables in the SOA_INFRA schema and BPMN processes. It is important to know how these are related because it can have a performance impact. We have seen this performance issue several times when BPMN processes are used to model high-volume system integrations without knowing all of the implications of using BPMN in this pattern.

    Most people assume that BPMN instances and their related data are stored in the CUBE_*, DLV_*, and AUDIT_* tables in the same way that BPEL instances are stored, with additional data in the BPM_* tables as well. The group of tables that is not usually considered, though, is the WF* tables that are used for Human Workflow. The WFTASK table is used by all BPMN processes in order to support features such as process-level comments and attachments, whether those features are currently used in the process or not.

    For a standard human task that is created from a BPMN process, the following data is stored in the WFTASK table (a quick way to check these row counts in your own environment is sketched after this entry):

    - One row per human task that is created
    - COMPONENTTYPE = "Workflow"
    - TASKDEFINITIONID = Human Task ID (partition/CompositeName!Version/TaskName)
    - ACCESSKEY = NULL

    Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki
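    As a side note to the article above, here is a rough JDBC sketch for counting WFTASK rows by component type, to gauge how many rows BPMN instances are generating. The connection URL and credentials are placeholders, and direct read access to the SOA_INFRA schema is assumed; the table and column names come from the article itself:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class WfTaskRowCounts {
            public static void main(String[] args) throws SQLException {
                // Placeholder connection details; point these at the database hosting SOA_INFRA
                String url = "jdbc:oracle:thin:@//dbhost:1521/soadb";
                try (Connection con = DriverManager.getConnection(url, "SOA_INFRA", "password");
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT COMPONENTTYPE, COUNT(*) FROM WFTASK GROUP BY COMPONENTTYPE");
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // Per the article, tasks created from BPMN show up with COMPONENTTYPE = 'Workflow'
                        System.out.println(rs.getString("COMPONENTTYPE") + ": " + rs.getLong(2));
                    }
                }
            }
        }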

    Read the article

  • What, if anything, to do about bow-shaped burndowns?

    - by Karl Bielefeldt
    I've started to notice a recurring pattern in our team's burndown charts, which I call a "bowstring" pattern. The ideal line is the "string," and the actual line starts out relatively flat, then curves down to meet the target like a bow. My theory on why they look like this is that toward the beginning of a story we are doing a lot of debugging or exploratory work for which it is difficult to estimate the remaining effort. Sometimes the line even goes up a little as we discover a task is more difficult once we get into it. Then we get into implementation and test, which is more predictable, hence the downward curve of the graph. Note that I'm not talking about anything on the scale of BDUF, just the natural short-term constraint that you have to find a bug before you can fix it, coupled with the fact that stories are most likely to start toward the beginning of a two-week iteration. Is this a common occurrence among scrum teams? Do people see it as a problem? If so, what is the root cause, and what are some techniques to deal with it?

    Read the article

  • Meet SQLBI at PASS Summit 2012 #sqlpass

    - by Marco Russo (SQLBI)
    Next week Alberto Ferrari and I will be in Seattle at PASS Summit 2012. You can meet us at our sessions, at a book signing, and hopefully while watching some other session during the conference. Here are our appointments:

    Thursday, November 08, 2012, 10:15 AM - 11:45 AM – Alberto Ferrari – Room 606-607
    Querying and Optimizing DAX (BIA-321-S)
    Do you want to learn how to write DAX queries and how to optimize them? Don't miss this session!

    Thursday, November 08, 2012, 12:00 PM - 12:30 PM – Bookstore
    Book signing event at the Bookstore corner with Alberto Ferrari, Marco Russo and Chris Webb
    Visit the bookstore and have your copy of our Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model book signed.

    Thursday, November 08, 2012, 1:30 PM - 2:45 PM – Marco Russo – Room 611
    Near Real-Time Analytics with xVelocity (without DirectQuery) (BIA-312)
    What's the latency you can tolerate for your data? Discover the limit in Tabular without using DirectQuery, and learn how to optimize your data model and your queries for a near real-time analytical system. Not a trivial task, but more affordable than you might think.

    Friday, November 09, 2012, 9:45 AM - 11:00 AM
    Parent-Child Hierarchies in Tabular (BIA-301)
    Multidimensional has more advanced support for hierarchies than Tabular, but in reality you can do almost the same things by using data modeling, DAX functions, and BIDS Helper!

    Friday, November 09, 2012, 1:00 PM - 2:15 PM – Marco Russo – Room 612
    Inside DAX Query Plans (BIA-403)
    Discover the query plan for your DAX query, and learn how to read it and how to optimize a DAX query by using this information.

    If you meet us at the conference, stop us and say hello: it's always nice to get to know our readers!

    Read the article

< Previous Page | 747 748 749 750 751 752 753 754 755 756 757 758  | Next Page >