Search Results

Search found 15680 results on 628 pages for 'action filter'.


  • JavaScript in different browsers

    - by PointsToShare
    Adventures with JavaScript rendered in IE 8, Chrome 15, and Firefox 8.0. I have written a little monograph about the advantages of Math and wrote a few JavaScript applications to demonstrate them. I was a bit careless and used elements on the page in my JavaScript without using any of the getElementsByXXXX methods to identify them. Say I had a text box named tbSeqNum into which I entered a number to be used in a computation. In my code I simply referred to its value directly, like here:

    function Blah() {
        return tbSeqNum.value;
    }

    This ran fine in IE8. In IE, the elements are available as global variables. This is not the case in either Firefox or Chrome; there you have to look the element up explicitly and only then use it. Assuming I also used tbSeqNum as the element's ID, this works:

    function Blah() {
        return document.getElementById("tbSeqNum").value;
    }

    Naturally this corrected function also works in IE, so be warned. Also, coming from Windows programming (I am long in the tooth and programmed long before the internet), I have a habit of putting an "Exit" button on my pages and setting their onclick to onclick="window.close()". Again, this works fine in IE. In Firefox and Chrome, it does not! There you can only close a window that you opened in code; a window that was opened by navigating to a URL will not close. Before I deployed my code to my website, I painfully removed all my Exit buttons.

    But my greatest surprise came when I tested my pages in the various browsers. In my code I do a comparison of the performance of two algorithms used to solve the same problem. One is brute force, the other uses a mathematical formula. The compare function runs each many times and displays the time it took for each, and also the ratio. Chrome runs JavaScript between 5 and 10 times faster than Firefox, and between 50 and 100 times faster than IE. Wow!!! This difference is especially remarkable when the code uses iteration. I suspect that the JS engines in Chrome and Firefox simply cache the result of a function and, if it is called again with the same parameters, return the cached result. To see it in action, run the "How Many Squares" page at www.mgsltns.com/games.htm (the host is running on Unix, so the link is case sensitive). Last note: IE9 runs JS a bit faster, but still lags behind almost as badly. That's All Folks!
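    A minimal self-contained sketch of the difference described above (the surrounding HTML is assumed; only tbSeqNum comes from the post's example):

    <input type="text" id="tbSeqNum" value="42" />
    <script>
        // IE8 exposes elements with an id as global variables, so this works there:
        function blahIeOnly() {
            return tbSeqNum.value;
        }

        // Looking the element up explicitly works in IE, Firefox, and Chrome alike:
        function blahPortable() {
            return document.getElementById("tbSeqNum").value;
        }
    </script>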

    Read the article

  • Go Big or Go Special

    - by Ajarn Mark Caldwell
    Watching Shark Tank tonight, the first presentation was by Mango Mango Preserves, and it highlighted an interesting contrast in business trends today and how to capitalize on opportunities. <Spoiler Alert> Even though every one of the sharks was raving about the product samples they tried, with two of them going for second and third servings, none of them made a deal to invest in the company. </Spoiler> In fact, one of the sharks, Kevin O'Leary, kept ripping into the owners with statements to the effect that he thinks they are headed over a financial cliff, because he felt their costs were way out of line and would be their downfall if they didn't take action to radically cut them. He said that he had previously owned a jams and jellies business and knew the cost ratios that you had to have to make it work. I don't doubt he knows exactly what he's talking about and is 100% accurate…for doing business his way, which I'll call "Go Big". But there's a whole other way to do business today that would be ideal for these ladies to pursue.

    As I understand it, based on his level of success in various businesses and the fact that he is even in a position to be investing in other companies, Kevin's approach is to go mass market (Go Big) and make hundreds of millions of dollars in sales (or something along that scale) while squeezing out every ounce of cost that you can to produce an acceptable margin. But there is a very different way of making a very successful business these days, which is all about building a passionate and loyal community of customers that are rooting for your success and even actively trying to help you succeed by promoting your product or company (Go Special). This capitalizes on the power of social media, niche marketing, and The Long Tail. One of the most prolific writers about capitalizing on this trend is Seth Godin, and I hope that the founders of Mango Mango pick up a couple of his books (probably Purple Cow and Tribes would be good starts) or at least read his blog. I think the adoration expressed by all of the sharks for the product is the biggest hint that they have a remarkable product and that they are perfect for this type of business approach.

    Both are completely valid business models, and it may certainly be that the scale at which Kevin O'Leary wants to conduct business where he invests his money is well beyond the long tail, but that doesn't mean that there is not still a lot of money to be made there. I wish them the best of luck with their endeavors!

    Read the article

  • Part 9: EBS Customizations, how to track

    - by volker.eckardt(at)oracle.com
    In the previous blogs we concentrated on the preparation tasks. We have defined standards, and we know about the tools and techniques we will start with. Additionally, we have defined the modification strategy and how best to handle such topics. Now we are ready to take the requirements!

    Such requirements come in as spreadsheets, Word files (like GAP documents), or in any other format. As we have to assign some attributes, we start by numbering them all and assigning a short name to each of these requirements (= CEMLI reference). We may also already have a functional person assigned, we might involve someone from the tech team to estimate, and we like to assign a status such as 'planned', 'estimated', etc. All these data are usually kept in spreadsheets, but I would put them into a database (yes, I am from Oracle :). If you don't have any good-looking and centralized application already, please give Oracle APEX a try. It should be up and running in a day, and the imported sheets are then manageable concurrently! For one of my clients I have created this CEMLI-DB; it has since been enriched with a lot of additional functionality, but initially it was just a simple centralized CEMLI tracking application.

    Why am I pointing out the centralized method of managing such data again? Well, your data quality will dramatically increase if you let your project members see (and also review and update) "your" data. APEX allows you to filter, sort, print, and also export. And if you can spend some time defining proper value lists, everyone will gain from them. APEX allows you to work in 'agile' mode, meaning you can improve your application step by step. Let's say you'd like to reference a document, or even upload one: you can do that. Or you need to classify the CEMLIs by release: just add a release field, and the same goes for business area or CEMLI type. One CEMLI record may then look like the screenshot in the original post.

    Prepare one or two (online) reports so you are ready to present your "workload" to the project management. Use such extracts also when you work offline (to prioritize, etc.), but as soon as you are connected again, feed the data back into the central application.

    Note: I have combined this application with an additional issue tracker. Here the most important element is the CEMLI reference, which acts as the link to any other application (if you are not also using APEX as the issue tracker :). Please spend a minute defining such a reference (see blog Part 8: How to name Customizations).

    Summary: Building the bridge from gap analysis to development has to be done in a controlled way. Usually the information is provided in different formats, but it is suggested to collect all requirements centrally. Oracle APEX is a great solution for entering and maintaining such information in a structured but flexible way. APEX helped me a lot when working with distributed development teams during the complete development cycle.
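    Not from the original post, but as a hedged sketch of what a minimal centralized CEMLI tracking table might look like (all table and column names are invented for illustration):

    CREATE TABLE cemli_tracking (
      cemli_ref          VARCHAR2(20) PRIMARY KEY,       -- short reference, e.g. XXFIN_0010
      short_name         VARCHAR2(80) NOT NULL,
      cemli_type         VARCHAR2(30),                   -- Report, Interface, Extension, ...
      business_area      VARCHAR2(30),
      app_release        VARCHAR2(10),
      status             VARCHAR2(20) DEFAULT 'planned', -- planned, estimated, ...
      functional_owner   VARCHAR2(40),
      tech_estimate_days NUMBER
    );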

    Read the article

  • Silverlight Cream for December 26, 2010 -- #1015

    - by Dave Campbell
    In this all-submittal Issue: Michael Washington(-2-), Ian T. Lackey(-2-, -3-), Sandrino Di Mattia, Colin Eberhardt(-2-), and Antoni Dol. Above the Fold: Silverlight: "A Style for the Silverlight CoverFlow Control Slider" Antoni Dol WP7: "Getting the right behaviors in your Phone 7 App – Part 1 Phone Home" (and the other two parts) Ian T. Lackey Silverlight/WPF: "A Simplified Grid Markup for Silverlight and WPF" Colin Eberhardt Shoutouts: Dennis Doomen has updated his Coding Guidelines and provided a new WhitePaper, A4 cheat sheet, and VS2010 rule sets: December Update of the Coding Guidelines for C# 3.0 and C# 4.0 From SilverlightCream.com: Windows Phone 7: Saving Data when Keyboard is visible Michael Washington takes a possible desktop approach to a data-saving issue on WP7... good solution, and one of the commenters brought up another. Windows Phone 7 View Model (MVVM) ApplicationBar Since I'm catching up, there's another post by Michael Washington... this one looks at the WP7 ApplicationBar and the issues you hit if you're trying to stay MVVM-proper. Michael gets around them by creating the AppBar with a behavior, and shares it with all of us! Getting the right behaviors in your Phone 7 App – Part 1 Phone Home Ian T. Lackey has begun a series where he's packaging common tasks into reusable behaviors. First up is a phone dialer launching action that can be dropped on any control in your app. Getting the right behaviors in your Phone 7 App – Part 2 Binding & Browsing In his next post, Ian T. Lackey digs into the WebBrowserTask and provides a behavior allowing you to launch a browser session straight to a URL from any WP7 control. Getting the right behaviors in your Phone 7 App – Part 3 Email 'em In his last post (all in one day), Ian T. Lackey looks at EmailComposeTask, ending up with a behavior to pre-populate EmailAddress and Subject. Cracking a Microsoft contest or why Silverlight-WCF security is important Sandrino Di Mattia was working on an app while also having a page up for an MSDN/TechNet game, and noticed some interesting WCF traffic that he was easily able to get access to. A Simplified Grid Markup for Silverlight and WPF Colin Eberhardt built us all an attached property for the Grid control that bails us out from the ugly layout we always have to put into position... oh, and it's also for WPF! #uksnow #silverlight The Movie! – Happy Christmas Colin Eberhardt also took some time to have fun with his Twitter/BingMaps mashup for the UKSnow hashtag... you can now play back the snowfall reports, and mouse over the snowflakes to see the original tweet... very cool stuff, Colin! A Style for the Silverlight CoverFlow Control Slider Antoni Dol got tired of the Silverlight Slider in the CoverFlow control and crafted a very nice-looking style for it... check it out and grab the source. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Too Few Women in IT!

    - by Yolande
    Last year, only 1% of attendees at Devoxx were women. This year, Devoxx addressed the issue in a panel entitled "Why We Should Target Women." On the panel were Kim Ross, Regina ten Bruggencate, Trisha Gee, Antonio Goncalves and Claude Falguière. The moderator was Martijn Verburg. The discussion focused on how to attract women to programming and how to get current women programmers to be more active in the community. The panelists agreed that the IT field should not just attract more women but also men of different ethnic backgrounds. The lack of women in programming is in part a cultural issue that differs from region to region. In developed countries, very few women work as programmers, whereas in Brazil and India a lot of women pursue careers in IT. Women in developed countries perceive the field as isolating, and very few young women graduate in computer science. This perception of isolation was based in reality decades ago, but that is no longer the case today.

    Main ideas discussed by the panel:
    - Parents should encourage their daughters to play with Lego and learn programming.
    - More organizations should target girls in high school and young women at university to expose them to programming. The Duchess organization is planning to be more involved in events and mentoring for young girls.
    - Women tend to be more self-critical about their skills and are intimidated by high skill requirements in job advertisements. Companies should change job advertisements to get more women to interviews.
    - Panelists don't recommend affirmative action, because women then feel favored and lose credibility. They want to be judged on their skills.
    - Panelists recommend acting the same way when dealing with either female or male co-workers and managers.
    - Women need mentors (men or women) to learn to become speakers at conferences and to promote themselves better.
    - Men should be sensitive to the fact that women at work are often on their own when responding to teasing from men. The balance of power at work is different from a social setting.
    - Men also experience discrimination on the job. It is more difficult for men to take time off when their children are sick, for example. Equal valuing of parental obligations could result in equal pay for women.

    See also: Trisha Gee's blog - http://mechanitis.blogspot.com/ Duchess Organization - http://www.jduchess.org/

    Read the article

  • IE9, HTML5 and truck load of other stuff happening around the web

    - by Harish Ranganathan
    First of all, I haven't been updating this blog as regularly as I used to. That's primarily because I was visiting a lot of cities talking about SharePoint, WebMatrix, IE9 and a few other things. IE9 is my new-found love, and I simply think we have done great work in improving the browser and the browsing experience for our users. This post talks about IE, general things happening around the web, and a few misconceptions around IE (I had earlier written about IE8 and common myths).

    When you think about the way the web has transformed, it's truly amazing. Rewind back to the late 90s and early 2000s: the web was a luxury. There were a lot of desktop applications around, and web applications were just starting to pick up. The primary reason was that not a lot of folks were into web development, and the web was largely confined to HTML and JavaScript. CSS was around here and there, but no one took it very seriously. XML and XSLT were fast picking up and contributed to decent web development techniques.

    So as a web developer, all we had to worry about was building good-looking websites which worked well with IE6 and occasionally with Safari. Firefox was not even in the picture then, and neither was Chrome. But with the various arms of the W3C consortium and other bodies working actively on things like CSS, SVG and XHTML, a few more areas came into the picture when it comes to browsers supporting standards. IE6 for sure wasn't up to speed, and the main issues we were tackling then were privacy and piracy. We invested a lot of effort to curb piracy, and one of the steps was that IE7, the next version of IE, would install only on genuine Windows machines. What this means is that people who were running pirated Windows XP, knowingly or unknowingly, could not install IE7, and the limitations of IE6 really hurt them. One more important point: if you were running pirated Windows, there was a good chance you didn't get the security updates and were thereby vulnerable to viruses and trojans on your system. Many of these actually blocked the use of IE in the first place and made it difficult to browse. SP2 came as a big boon, but again was there only for genuine Windows machines.

    With Firefox coming as a free install and also being heavily pushed by Google at the time, it was natural that people would try an alternative. By then, we had started working on IE8 supporting the best standards (note that HTML5, CSS 2.1 and other specs were then works in progress; they still are). Later, Google in their infinite wisdom realized that with Firefox they were going nowhere, and they released Chrome. Now they heavily push Chrome even to Firefox users, which is natural since it's their browser. Meanwhile, these browsers push their updates as mandatory and therefore have a very short cycle for adding enhancements and support for things like CSS. When IE8 came out, it really was the best standards-supporting browser at the time, and a lot of people saw our efforts in improving our browser.

    HTML5 is the buzzword in the industry, and there is a lot of noise being made by many browsers claiming support for it. IE8 doesn't have much support for HTML5. But with IE9 Beta, we have great support for many of the HTML5 specifications. Note that HTML5 is still a work in progress, and one of the members of the body working on the spec has mentioned that the specs might change and that relying on them heavily is dangerous. But some of the advances, such as the video tag, are indeed supported in IE9 Beta.
    IE9 Beta also has full hardware acceleration support, which other browsers don't have. IE8 had advanced security features such as the SmartScreen Filter, InPrivate Browsing, anti-phishing and a lot of other things. IE9 builds on top of these with the best security standards in town, as well as support for HTML5, CSS3, hardware acceleration, SVG and many other browser advancements. Read more at http://www.beautyoftheweb.com/#/highlights/html5 To summarize, IE9 Beta is really innovative, and you should try it to believe what it provides. You can visit http://www.beautyoftheweb.com/ to install it as well as read more. Cheers !!!
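    As a small illustration of the HTML5 video tag mentioned above (a hedged sketch; the file name and dimensions are invented for the example):

    <video src="demo.mp4" width="640" height="360" controls>
        Your browser does not support the HTML5 video element.
    </video>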

    Read the article

  • Oracle VM networking under the hood and 3 new templates

    - by Chris Kawalek
    We have a few cool things to tell you about. First up: have you ever wondered what happens behind the scenes in the network when you live migrate your Oracle VM server workload? Or how Oracle VM implements the network infrastructure you configure through your point-and-click actions in the GUI? Really… how do they do this? For an in-depth view of the Oracle VM for x86 networking model, Look 'Under the Hood' at Networking in Oracle VM Server for x86 with our best practices engineer in a blog post on OTN Garage.

    Next, making things simple in Oracle VM is what we strive to deliver to our user community every day. With that, we are pleased to bring you updates on three new Oracle application templates:

    E-Business Suite 12.1.3 for Oracle Exalogic: Oracle VM templates for Oracle E-Business Suite 12.1.3 (x86 64-bit for Oracle Exalogic Elastic Cloud) contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server. You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and software installs (via the EBS Rapid Install). For further details, please review the announcement.

    JD Edwards EnterpriseOne 9.1 and JD Edwards EnterpriseOne Tools 9.1.2.1 for x86 servers and Oracle Exalogic: The Oracle VM Templates for JD Edwards EnterpriseOne provide a method to rapidly install JD Edwards EnterpriseOne 9.1 and Tools 9.1.2.1. The complete stack includes Oracle Database 11g R2 and Oracle WebLogic Server 10.3.5 running on Oracle Linux 5. The templates can be installed to Oracle VM Server for x86 release 3.x and to the Oracle Exalogic Elastic Cloud.

    PeopleSoft PeopleTools 8.5.2.10 for Oracle Exalogic: This virtual deployment package delivers a "quick start" PeopleSoft middle-tier template on Oracle Linux for the Oracle Exalogic Elastic Cloud.

    And last, are you wondering why we talk about "fast" and "rapid" when we refer to using Oracle VM templates to virtualize Oracle applications? Read the Evaluator Group Lab Validation report quantifying speeds of deployment up to 10x faster than with VMware vSphere. Or you can also check out our on-demand webcast, Quantifying the Value of Application-Driven Virtualization.

    Read the article

  • Stop Office 2010 Upload Center Icon from Displaying in the Taskbar

    - by Mysticgeek
    One of the new features in Office 2010 is the ability to upload your files to Office Web Apps. When you do, an Upload Center icon appears in the Taskbar and helps manage documents. Here's how to stop it from showing up.

    If you're running Office 2010 and upload files to the web, you'll notice the Microsoft Office Upload Center icon appears on the Taskbar in the Notification Area. It will stay there even after you're done uploading the document and have closed out of all Office apps. You can use it to monitor and control the documents you're uploading to the web. Getting rid of it is fairly simple. Right-click the icon and select Settings. When the Microsoft Office Upload Center Settings window appears, under Display Options, uncheck Display icon in notification area and click OK. That is all there is to it… now it will no longer appear in the Taskbar.

    After you upload your first document, it will also want to start up with Windows. You can go into msconfig and disable it from automatically starting. If you need to access it again, it's part of Office 2010 Tools, which you can access from the Start Menu. Or you can type upload center into the Search box in the Start Menu and hit Enter. If you upload a lot of work to Microsoft Web Apps you might find this tool useful, but if you only occasionally upload docs, you might be annoyed by it always being in the Taskbar.

    Read the article

  • Add a non-Google Tasks List to Chrome

    - by Asian Angel
    Most people rely on a task list to help them remember what they need to do, but not everyone wants one that is tied to a Google account. If you have been wanting an independent tasks list, then join us as we look at the Tasks extension for Google Chrome.

    Tasks in Action: As soon as you have finished installing the extension, you are ready to start adding new tasks to your list. Enter your task into the "Text Area" and press "Enter" to add the task to the list. Note: Your tasks list will be retained (in the order you set) when you close and then reopen your browser. In just moments you can have your task list ready to go. Notice that there is also a "numerical indicator" attached to the "Toolbar Button" so that you will always know how many tasks you have left to complete. You can use the "drag and drop" function to rearrange your list into a more proper order if needed. When you are finished with a task, all that you need to do is click on the "Checkmark" to remove it from the list. If you need to make a new entry similar to an existing one, simply right-click and the text is automatically pasted into the "Text Area". Make any desired changes and press "Enter" to add your new task to the list. Prefer to skip using the drop-down window? Click on "Tasks" at the top to open your list in a new tab instead. The tasks list looked very nice in our new tab. Being able to use the style that best suits your needs makes this a very convenient extension.

    Conclusion: The Tasks extension is a perfect fit for anyone who needs a tasks list available but does not want to be tied down with an online account. Quick, simple, and best of all, hassle free.

    Links: Download the Tasks extension (Google Chrome Extensions)

    Read the article

  • Windows Phone 7 Developer Tools - January 2011 Update

    - by Nikita Polyakov
    Long time no talk? So to make up for it, here is something very new – an update to the WP7 Dev Tools!

    The Windows Phone 7 Developer Tools January 2011 Update provides bits that you install on TOP of the current WP7 Dev Tools on your machine. If you are just installing the tools for the first time, this update replaces the previously released October patch; in fact, the patch is no longer available, as this January 2011 update replaces it entirely.

    What is in this update?
    - TextBox support for Copy & Paste
    - An updated emulator image that contains Copy & Paste for your testing
    - Performance tweaks for the OS
    - Minor bug fixes

    How does it work? The Copy & Paste support extends the existing TextBox control to add this new functionality. There is currently no API access to the Clipboard, and no support for other controls that are not based on TextBox.

    If I have… / Do I need to…
    - A current application in the marketplace / No action is required.
    - An application that contains a TextBox on a Pivot or Panorama control surface / Test your application in the provided emulator. The recommendation is to move TextBox controls from directly on top of controls that listen to gesture movement to their own pop-off screens or entire pages, as gestures might interfere with the select behavior for Copy & Paste.
    - Controls that do not inherit from TextBox / Such controls will not get the new Copy & Paste behavior.

    Note: The update materials, FAQ and Q&A do not answer WHEN the update for the OS will be sent to the phones. Also note that this update does NOT update your developer phone to enable Copy & Paste or any other features.

    Windows Phone 7 Training Kit February Update: the Windows Phone Training Kit has also been updated – you can grab a fresh copy here.

    Where do I find more good information, documentation and training? This very awesome blog post from the Windows Phone Developer Blog: Windows Phone 7 Documentation Landscape. The official blog post on the update is here.

    Happy coding! -Nikita

    PS: I am well aware that it is Feb 4th and not January :) If you were disappointed that Microsoft said nothing at all at CES about the future of WP7, don't forget that MWC 2011 is Feb 14th – I am going to be listening for Windows Phone announcements then, as that is where the original Windows Phone 7 announcements were made.

    Read the article

  • Data Connection Library in SharePoint Server 2010

    - by Wayne
    What is a Data Connection Library in SharePoint Server 2010? A Data Connection Library in Microsoft SharePoint Server 2010 is a library that can contain two kinds of data connections: an Office Data Connection (ODC) file or a Universal Data Connection (UDC) file. Microsoft InfoPath 2010 uses data connections that comply with the Universal Data Connection (UDC) file schema and typically have either a *.udcx or *.xml file name extension. Data sources described by these data connections are stored on the server and can be used in standard form templates and browser-enabled form templates.

    How to create a SharePoint Server Data Connection Library:
    1. Browse to a SharePoint Server 2010 site on which you have at least Design permissions. If you are on the root site, create a new site before you continue with the next step.
    2. On the Site Actions menu, click More Options.
    3. On the Create page, click Library under Filter By, and then click Data Connection Library.
    4. On the right side of the Create page, type a name for the library, and then click the Create button.
    5. Copy the URL of the new data connection library.

    How to create a new data connection file in InfoPath:
    1. Open InfoPath Designer 2010, click Blank Form, and then click Design Form.
    2. On the Data tab, click Data Connections, and then click Add.
    3. In the Data Connection Wizard, click Create a new connection to, click Receive data, and then click Next.
    4. Click the kind of data source that you are connecting to, such as Database, Web service, or SharePoint library or list, and then click Next.
    5. Complete the remaining steps in the Data Connection Wizard to configure your data connection, and then click Finish to return to the Data Connections dialog box.
    6. In the Data Connections dialog box, click Convert to Connection File.
    7. In the Convert Data Connection dialog box, enter the URL of the data connection library that you previously copied (delete "Forms/AllItems.aspx" and anything following it from the URL), enter a name for the data connection file at the end of the URL, and then click OK. It will take a few moments to convert and save the data connection file to the library.
    8. Confirm that the data connection was converted successfully by examining the Details section of the Data Connections dialog box while the name of the converted data connection is selected.
    9. Browse to the SharePoint data connection library, click the drop-down next to the name of the data connection, click Approve/Reject, click Approved, and then click OK.

    Take advantage of the Data Connection Library and other useful features of the SharePoint family of products, which includes SharePoint Foundation 2010, MOSS 2007, and free SharePoint templates and web parts.

    Read the article

  • AWS .NET SDK v2: the message-pump pattern

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/11/aws-.net-sdk-v2--the-message-pump-pattern.aspx

    Version 2 of the AWS SDK for .NET has had a few pre-release iterations on NuGet and is stable, if a bit lacking in step-by-step guides. There's at least one big reason to try it out: the SQS queue client now supports asynchronous reads, so you don't need a clumsy polling mechanism to retrieve messages. The new approach is easy to use, and lets you work with AWS queues in a similar way to the message-pump pattern used in the latest Azure SDK for Service Bus queues and topics. I've posted a simple wrapper class for subscribing to an SQS hub on gist here: A wrapper for the SQS client in the AWS SDK for .NET v2, which uses the message-pump pattern.

    Here's the core functionality in the subscribe method:

    private async void Subscribe()
    {
        if (_isListening)
        {
            var request = new ReceiveMessageRequest { MaxNumberOfMessages = 10 };
            request.QueueUrl = QueueUrl;
            var result = await _sqsClient.ReceiveMessageAsync(request, _cancellationTokenSource.Token);
            if (result.Messages.Count > 0)
            {
                foreach (var message in result.Messages)
                {
                    if (_receiveAction != null && message != null)
                    {
                        _receiveAction(message.Body);
                        DeleteMessage(message.ReceiptHandle);
                    }
                }
            }
        }
        if (_isListening)
        {
            Subscribe();
        }
    }

    which you call with something like this:

    client.Subscribe(x=>Log.Debug(x.Body));

    The async SDK call returns when there is something in the queue, and will run your receive action for every message it gets in the batch (which defaults to the maximum size of 10 messages per call). The listener will sit there awaiting messages until you stop it with:

    client.Unsubscribe();

    Internally it has a cancellation token which it sets when you call Unsubscribe, which cancels any in-flight call to SQS and stops the pump. The wrapper will also create the queue if it doesn't exist at runtime. The Ensure() method gets called in the constructor, so when you first use the client for a queue (sending or subscribing), it will set itself up:

    if (!Exists())
    {
        var request = new CreateQueueRequest();
        request.QueueName = QueueName;
        var response = _sqsClient.CreateQueue(request);
        QueueUrl = response.QueueUrl;
    }

    The Exists() check has to make a call to ListQueues on the SQS client, as the client doesn't provide its own method to check whether a queue exists. That call also populates the Amazon Resource Name, the unique identifier for this queue, which will be useful later. To use the wrapper, just instantiate and go:

    var queueClient = new QueueClient("ProcessWorkflow");
    queueClient.Subscribe(x=>Log.Debug(x.Body));
    var message = {}; //etc.
    queueClient.Send(message);

    Read the article

  • T-SQL in Chicago – the LobsterPot teams with DataEducation

    - by Rob Farley
    In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago. The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why. For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, then it just became a frustration, but for the most part, it was good to know that I was good at this particular thing.) But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help. This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example – I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks. If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote this course. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind. The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as is simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right. In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective. I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. 
That’s not a comment about the jokes I tell, but because people want to look at the queries they run. LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course. @rob_farley
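    In case you haven't met MERGE yet, here is a minimal hedged sketch of the statement mentioned above (table and column names are invented for the example):

    MERGE INTO dbo.Customers AS tgt
    USING dbo.CustomerStaging AS src
        ON tgt.CustomerId = src.CustomerId
    WHEN MATCHED THEN
        UPDATE SET tgt.Name = src.Name
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerId, Name) VALUES (src.CustomerId, src.Name)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;

    One statement handles the insert, update, and delete cases that would otherwise take three separate statements, which is exactly the kind of construct the course digs into.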

    Read the article

  • Game development: “Play Now” via website vs. download & install

    - by Inside
    Heyo, I've spent some time looking over the various threads here on gamedev and also on the regular stackoverflow and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen very much discussion regarding the various platforms that they can be used on. In particular, I'm talking about browser games vs. desktop games. I want to develop a simple 3D networked multiplayer game - roughly on the graphics level of Paper Mario and gameplay with roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform I want to target with it. I have some experience with using C++/Ogre3D and Python/Panda3D (and also some synchronized/networked programming), but I'm wondering if it's worth it to spend the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now). For simple & short games the newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "ok, I'm going to download and play that"? Is it worth it to go with engines that are less-mature, have less documentation, have fewer features, and smaller communities* just so that a (possibly?) larger audience can be reached? Does it make sense to even go with a web-environment for the kind of game that I want to make? Does anyone have any experiences with decisions like this? Thanks! (* With the exception of flash-based engines it seems like most of the other approaches have these downsides when compared to what is available for desktop-based environments. I'd go with flash, but I'm worried that flash's 3D capabilities aren't mature enough right now to do what I want easily. There's also Unity3D, but I'm not sure how I feel about that at all. It seems highly polished, but requires a plugin to be downloaded for the game to be played -- at that rate I might as well have players download my game.)

    Read the article

  • What's your advise on a potential legal suit? [closed]

    - by ohho
    I [xxx app developer] received an email from Apple that a developer [of yyy app] believes I am "infringing their copyright."

    Description of Issue: [xxx developer] copied my application (my application is [yyy]) feature by feature. Even their donation model is completely copied from my application. Their first release was significantly later than mine, which implies copying of the application rather than parallel development. I suffered significant financial losses because of their actions, in addition to promotional problems, as many people confuse their product with mine. My advertising was based around the idea of a "free [yyy] application for an iPhone" and they have just taken that as a title for their application. I would appreciate it if someone takes a look at their release schedule and compares it to my releases. Additionally, please take a look at their functionality and how it point by point copies the functionality of my older releases. I am asking Apple to remove their application from the App Store, and ban them from resubmitting it. Thank you for your time! [yyy developer], the developer of the [yyy] application.

    My response was: The code of [xxx] is written by myself, using Apple's public APIs. The graphics elements are designed by myself. The user interface and app controls are independently designed and different from other [similar type] apps (please judge yourself). In-App Purchase is a standard iOS Apple API. iAd is a standard iOS Apple API. I don't think features can be owned exclusively. In fact, my app comes with fewer features, as I prefer minimalist design. I don't think an idea can be owned exclusively.

    Apple responded: Thank you for your response. Unfortunately, Apple cannot serve as arbiter for disputes among third parties. Please contact [yyy developer] directly regarding your actions. You can reach [yyy developer] through: [...]. We look forward to confirmation from both parties that this issue has been resolved. If this issue is not resolved shortly, Apple may be forced to pull your application(s) from the App Store.

    Then I sent my response above to [yyy developer]. [yyy developer] then asked me "to provide (my) legal address and contact details that (his) lawyer requires to file a copyright infringement suit."

    IMO, I don't think the [yyy developer]'s claim of a "feature by feature" copy is valid. I have fewer features and a completely different user interface design (please judge yourself). However, I don't think I can afford legal action for an app with so little financial return. So what's your advice on this? Should I just let Apple pull my app? Or is there any alternative I can consider?

    FYI, screenshots of the UIs of [xxx app] and [yyy app] accompanied the original question.

    Read the article

  • Creating predefined camera views - How do I move the camera to make sense while using a controller?

    - by Deukalion
    I'm trying to understand 3D, but the one thing I can't seem to understand is the camera. Right now I'm rendering four 3D cubes with textures, and I set the projection matrix like this:

    public BasicCamera3D(float fieldOfView, float aspectRatio, float clipStart, float clipEnd, Vector3 cameraPosition, Vector3 cameraLookAt)
    {
        projection_fieldOfView = MathHelper.ToRadians(fieldOfView);
        projection_aspectRatio = aspectRatio;
        projection_clipstart = clipStart;
        projection_clipend = clipEnd;
        matrix_projection = Matrix.CreatePerspectiveFieldOfView(projection_fieldOfView, aspectRatio, clipStart, clipEnd);
        view_cameraposition = cameraPosition;
        view_cameralookat = cameraLookAt;
        matrix_view = Matrix.CreateLookAt(cameraPosition, cameraLookAt, Vector3.Up);
    }

    BasicCamera3D gameCamera = new BasicCamera3D(45f, GraphicsDevice.Viewport.AspectRatio, 1.0f, 1000f, new Vector3(0, 0, 8), new Vector3(0, 0, 0));

    This creates a sort of "top-down" camera at a distance of 8 (I still don't get the unit type here - it's not pixels, I guess?). But if I try to position the camera at the side to make a "side view" or "reverse side view" camera, the camera rotates too much, until it has turned around a couple of times.

    I render the boxes at:

    new Vector3(-1, 0, 0)
    new Vector3(0, 0, 0)
    new Vector3(1, 0, 0)
    new Vector3(1, 0, 1)

    With the top-down camera they show up fine, but I don't get how I can make the camera show the side, or 45 degrees from the top (like 3rd-person action games), because the logic doesn't make sense to me. Also, since every object you render needs a new BasicEffect with a new projection/view/world, can you still always use the "same" camera, so you don't have to create a new view matrix and such for each object? It seems weird.

    If someone could help me get the camera to navigate around my objects "naturally", so I can set a few predetermined views to choose from, it would be really helpful. Is there some sort of algorithm to calculate the view for this, and perhaps not simply one value?

    Examples:

    Top-down view: I have an object at 0, 0, 0. When I turn the right stick on the Xbox 360 controller, the camera should rotate around that object, not flip and turn upside down, disappear, and then magically reappear as a tiny dot somewhere I have no clue about, like it feels like it does now.

    Side view: I have an object at 0, 0, 0. When I rotate to the sides or up and down, the camera should show a little more of the periphery on each side (depending on which way you look), and the same while moving up or down.
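    Not part of the original question, but as a hedged sketch of one common approach to the orbiting described above: accumulate yaw/pitch angles from the right thumbstick and rebuild the view matrix from spherical coordinates each frame (all names besides the XNA API calls are invented):

    // Position the camera on a sphere around the target; clamping pitch keeps it from flipping over the top.
    Vector3 OrbitPosition(Vector3 target, float distance, float yaw, float pitch)
    {
        pitch = MathHelper.Clamp(pitch, -MathHelper.PiOver2 + 0.1f, MathHelper.PiOver2 - 0.1f);
        float x = distance * (float)(Math.Cos(pitch) * Math.Sin(yaw));
        float y = distance * (float)Math.Sin(pitch);
        float z = distance * (float)(Math.Cos(pitch) * Math.Cos(yaw));
        return target + new Vector3(x, y, z);
    }

    // Each frame: update the angles from the controller, then rebuild the view matrix.
    // yaw += gamePad.ThumbSticks.Right.X * 0.05f;
    // pitch += gamePad.ThumbSticks.Right.Y * 0.05f;
    // matrix_view = Matrix.CreateLookAt(OrbitPosition(target, 8f, yaw, pitch), target, Vector3.Up);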

    Read the article

  • Fast programmatic compare of "timetable" data

    - by Brendan Green
    Consider train timetable data, where each service (or "run") has a data structure like this:

    public class TimeTable
    {
        public int Id { get; set; }
        public List<Run> Runs { get; set; }
    }

    public class Run
    {
        public List<Stop> Stops { get; set; }
        public int RunId { get; set; }
    }

    public class Stop
    {
        public int StationId { get; set; }
        public TimeSpan? StopTime { get; set; }
        public bool IsStop { get; set; }
    }

    We have a list of runs that operate against a particular line (the TimeTable class). Further, while we have a set collection of stations on a line, not all runs stop at all stations (that is, IsStop would be false and StopTime would be null).

    Now, imagine that we have received the initial timetable, processed it, and loaded it into the above data structure. Once the initial load is complete, it is persisted into a database - the data structure is used only to load the timetable from its source and to persist it to the database.

    We are now receiving an updated timetable. The updated timetable may or may not have any changes in it - we don't know, and we are not told whether any changes are present. What I would like to do is compare each run in an efficient manner. I don't want to simply replace each run. Instead, I want a background task that runs periodically, downloads the updated timetable dataset, and then compares it to the current timetable. If differences are found, some action (not relevant to the question) will take place.

    I was initially thinking of some sort of checksum process where I could, for example, load both runs (that is, the one from the newly received timetable and the one persisted to the database) into the data structure, then add up all the hour components of StopTime and all the minute components of StopTime and compare the results (i.e. both the sum of hours and the sum of minutes would be the same, with differences introduced if a stop time is changed, a stop deleted, or a new stop added). Would that be a valid way to check for differences, or is there a better way to approach this problem? I can see a problem where, for example, one stop changed to be 2 minutes earlier and another changed to be 2 minutes later would produce a net zero change.

    Or am I overthinking this, and would it just be simpler to brute-force check all stops to ensure that:
    1. The updated run stops at the same stations; and
    2. Each stop is at the same time?
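    As a hedged sketch of the brute-force alternative (reusing the Run and Stop classes above; the method name is invented), a stop-by-stop comparison avoids the cancellation problem noted with the checksum, where +2 minutes on one stop and -2 minutes on another sum to zero:

    // True when two runs have identical stop patterns and times, compared in order.
    static bool RunsAreEqual(Run a, Run b)
    {
        if (a.RunId != b.RunId || a.Stops.Count != b.Stops.Count)
            return false;

        for (int i = 0; i < a.Stops.Count; i++)
        {
            Stop x = a.Stops[i];
            Stop y = b.Stops[i];
            // Nullable TimeSpans compare as equal when both are null.
            if (x.StationId != y.StationId || x.IsStop != y.IsStop || x.StopTime != y.StopTime)
                return false;
        }
        return true;
    }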

    Read the article

  • Silverlight Cream for November 17, 2011 -- #1168

    - by Dave Campbell
    In this Issue: Colin Eberhardt, Lazar Nikolov, WindowsPhoneGeek, Jesse Liberty, Peter Kuhn, Derik Whittaker, Chris Koenig, and Jeff Blankenburg(-2-). Above the Fold: Silverlight: "Facebook Graph API and Silverlight Part 2 – Publishing data" Lazar Nikolov WP7: "Suppressing Zoom and Scroll interactions in the Windows Phone 7 WebBrowser Control" Colin Eberhardt Metro/WinRT/W8: "Tip/Trick when working with the Application Bar in WinRT/Metro (C#)" Derik Whittaker Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up Pete Brown announced the completion of his book: It's a wrap! I've completed writing Silverlight 5 in Action From SilverlightCream.com: Suppressing Zoom and Scroll interactions in the Windows Phone 7 WebBrowser Control Colin Eberhardt's latest post is all about a helper class he wrote to suppress scrolling and pinch zoom of the WP7 browser control, which you might want to do if the browser is placed inside another control. Facebook Graph API and Silverlight Part 2 – Publishing data In this part 2 of his Facebook and Silverlight series, Lazar Nikolov shows how to post data to your profile or your friends' profiles. Localizing a Windows Phone app Step by Step WindowsPhoneGeek's latest post is on localizing a WP7 app... another great tutorial with plenty of discussion, pictures, and a project to load up and follow. Background Audio Part II: Copying Audio Files To Isolated Storage Continuing his WP7 series, Jesse Liberty has Part 2 of a mini-series on background audio up... in this episode he's using local audio, and to do so it must be in isolated storage. Silverlight: Bugs in the multicast client In a Q&A session, Peter Kuhn was presented with a nasty bug in the multicast client that he has verified exists not only in Silverlight 4 but also in the Silverlight 5 Beta; he includes a link to his entry on Connect. Tip/Trick when working with the Application Bar in WinRT/Metro (C#) Derik Whittaker offers up some good information about the Metro Application Bar and how to keep it where you want it. New! Windows Phone Starter Kit for Podcasts Chris Koenig announced the release of a new starter kit for WP7... a starter kit for podcasts. Check out the links on Chris' site and the other two starter kits that are available. 31 Days of Mango | Day #4: Compass Jeff Blankenburg continues with Day 4 of his Mango series with this post on the compass and a cool app to demonstrate it. 31 Days of Mango | Day #5: Gyroscope In Day 5, Jeff Blankenburg is talking about and discussing the gyroscope; of course, if you have a phone as old as mine, you won't have a gyroscope, and it's not on the emulator either. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Importing PKCS#12 (.p12) files into Firefox From the Command Line

    - by user11165
    I've posted this question up on the Ubuntu and Firefox forums, and really could do with some help. Does anyone know where I could look, or can anyone help with the answer? I'm hoping the power of social media will come through…

    I have a need to perform the following action in Firefox 3.6.x: open Edit - Preferences - Advanced - Encryption - View Certificates - Your Certificates - Import. However, I need the same functionality from the bash command line. So far I've established that the following command is supposed to be used:

    certutil -A -t "u,u,u" -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -n "mycert" -i client.p12

    This executes with no issues; however, the certificate doesn't show up in any Firefox certificate store. I have noted that prior to running this command, I had cert8.db, key3.db and secmod.db files in the above folder. After running the command, certutil seems to have created cert9.db, key4.db and pkcs12.txt files. Listing the contents using the command:

    certutil -L -d sql:/home/df001/.mozilla/firefox/qe5y5lht.tc.default/

    does seem to confirm that my attempts at importing files into a certificate store of some kind have worked, because I get:

    Certificate Nickname                         Trust Attributes (SSL,S/MIME,JAR/XPI)
    Thawte SSL CA                                ,,
    Go Daddy Secure Certification Authority      ,,
    Thawte SGC CA                                ,,
    Entrust Certification Authority - L1C        ,,
    My Nero                                      CT,C,c
    mynero                                       P,,
    davidfield - Internet Widgits Pty Ltd        u,u,u

    So, having tried this and headed back over to the web, I came across this command:

    pk12util -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -i client.p12 -n "David Field" -P "cert8.db"

    This again appears to be importing something somewhere; however, viewing certs from the Firefox interface again doesn't show the imported cert. From my reading, I'm surmising that certutil and pk12util are creating a new NSS database which Firefox isn't reading.

    So my question is: how can I import the p12 cert from the command line so that it displays in the Firefox Certificate Manager interface?

    Why have I posted this here? Why not post on the Firefox forum? Well, I will copy and post the same question there as well, but the ability to use the command line to do this is important, as I have potentially 2000 machines which will need a user cert imported into Firefox via a p12 file. I need to do this in the form of a script. I thought the hard part was going to be making the p12 file from the Microsoft 2003 CA; it turns out that's easy. I can't just import via the GUI and copy cert8.db over to 2000 machines, and I can't ask users to use the CA web interface, as it's for VPN access: the users are off site, and they need the VPN to get to the cert server. Is there anyone out there who can help?

    By the way, I don't have the Tor button installed.
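    Building on the surmise above, a hedged sketch (not a confirmed fix): the tools on this system appear to be writing the newer sql-format database (the cert9.db observed above), while Firefox 3.6 reads the legacy dbm format (cert8.db), so forcing the dbm format with the "dbm:" prefix may be worth a try:

    # Hypothetical loop over user profiles; the dbm: prefix targets cert8.db/key3.db.
    # -W supplies the .p12 file's password.
    for profile in /home/*/.mozilla/firefox/*.default; do
        pk12util -d dbm:"$profile" -i client.p12 -W "p12-password-here"
    done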

    Read the article

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of WordPress on SQL Server and IIS, you first need to complete the following steps:

    1. Create a database on SQL Server that will be used by WordPress.
    2. Create a login that can access the just-created database, and put the user into the db_ddladmin, db_datareader and db_datawriter roles.
    3. Download and unpack the WordPress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice.
    4. Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website.

    Now that the basic steps have been done, you can start to set up and configure your WordPress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to:

    1. Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins. This should be parallel to your regular plugins directory. If the mu-plugins directory does not exist, you must create it.
    2. Put the db.php file from inside the wp-db-abstraction.php directory into wp-content/db.php.

    Now you can create an application pool in IIS, and then create a website, using that application pool, that points to the folder where you unpacked the WordPress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis

    Now you're ready to go. Point your browser to the configured website and the WordPress installation screen will be there for you. When you're asked to enter information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and here you can enter the configuration information needed to connect to SQL Server. After finishing the installation steps, you should be able to access and navigate your WordPress site. A final touch, and it's done: just add the needed rewrite rules (http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite) and that's it!

    Well, not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed:

    Backslash fix: http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage

    "SELECT TOP 0" fix: make the change to the file ".\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php" suggested by "debettap": http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061

    And now you have a 100% working WordPress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full-Text Search, I've created a very simple WordPress plugin to set up full-text search and to use it as the website search engine: http://wpfts.codeplex.com/ Enjoy!
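    As a hedged sketch of the database and login steps above (the database name, login name, and password are placeholders, not from the original post):

    -- Create the database and a SQL login, then map the login to a user
    -- and add it to the three roles the abstraction layer needs.
    CREATE DATABASE WordpressDb;
    GO
    CREATE LOGIN WordpressUser WITH PASSWORD = 'ChangeMe!123';
    GO
    USE WordpressDb;
    GO
    CREATE USER WordpressUser FOR LOGIN WordpressUser;
    EXEC sp_addrolemember 'db_ddladmin',   'WordpressUser';
    EXEC sp_addrolemember 'db_datareader', 'WordpressUser';
    EXEC sp_addrolemember 'db_datawriter', 'WordpressUser';
    GO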

    Read the article

  • links for 2011-01-06

    - by Bob Rhubart
    Coming to your town: Oracle Enterprise Cloud Summit During these full-day events, cloud experts will share real-world best practices, reference architectures, detailed customer case studies, and more. Events scheduled in cities around the world.  (tags: oracle otn cloud event) Webcast: Security and Compliance for Private Cloud Consolidation Roxana Bradescu, Senior Director for Oracle Database Security Products, discusses Oracle Database Security Solutions to securely consolidate data and meet compliance requirements within private cloud computing environments. Thursday, January 13, 2011. 10am PST | 1pm EST (tags: oracle cloud security) Answering Questions about Mobile Devices | The AppsLab "How do the numbers of Android and iOS users compare? How often are people switching? Where are all these BlackBerry and Nokia users? Do they plan to jump to Android or iOS? What about webOS? Is it relevant?" Some answers in this AppsLab survey. (tags: oracle otn enterprise2.0 mobilecomputing iphone blackberry android) Webcast: Achieve 24/7 Cloud Availability Without Expensive Redundancy Ashish Ray and Matthew Baier discuss Oracle’s Maximum Availability Architecture and Oracle Database 11g. (tags: oracle cloud highavailability webcast) Converting a PV vm back into an HVM vm (Wim Coekaerts Blog) "I wanted to convert one of my VMs that was based on a paravirt kernel into a vm that just boots as a regular hardware virt VM with a standard x86-64 kernel...It took me a little while to figure out the fastest way so now that I have it pretty much down I wanted to share the steps." - Wim Coekaerts (tags: oracle otn virtualization oraclevm) @OTN_Garage: Resources for VirtualBox 4.0 Rick "@OTN_Garage" Ramsey shares links to several resources for those with a VirtualBox jones. (tags: oracle otn virtualization virtualbox) 'Federal Service Bus' Helps Belgian Government Speak a Common Language - SOA in Action Blog "The first SOA-enabled application was developed in less than two months and was fully operational in approximately 10 weeks. In addition, new FSB modules are reusable for other Belgian e-government applications, saving both time and taxpayer dollars." - Joe McKendrick (tags: soa oracle) Show Notes: Architects in the Cloud (ArchBeat Podcast) The complete 4-part interview with Stephen G. Bennett and Archie Reed, the authors of "Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing," is now available. (tags: oracle otn cloud podcast archbeat)

    Read the article

  • SQL – Business Intelligence: Derive Data or Information?

    - by Pinal Dave
    We all know the value of information in our lives. Whether it's a personal decision or a business-initiated one, people need it. But the question is: who is to make the distinction between data and information? We all come across a whole lot of data daily, which may or may not be significant. We filter what's required and forget about the rest. Information is filtered and distilled data. Filtering and distillation can also alter the data's actual meaning and natural state. Therefore, in this blog we discover some ways to ensure that we're using business intelligence derived from the right information for making critical management decisions.
    Four key questions managers must ask themselves before making a decision:
    1. Am I working with data or information?
    2. What is its context?
    3. How recent is it?
    4. How was it derived, or what is the source?
    The first question is probably the most important. You must know what you're dealing with here. If you see adjectives used and conclusions drawn, it's information, not raw data. Your very next concern must be whether this has been shaped to present a particular viewpoint or perspective. It makes a lot of difference if you base a decision on someone's propaganda that distorts the real facts. Therefore, the context and the intentions behind the distillation process must be clear to you. The next consideration is whether the data is recent enough to hold any value. Since data has a very short shelf life, you must ensure that its context and value are not lost over time. The last and most important consideration is how it was derived in the first place. The observer effect is what calls the shots here: the source can change the context to a great extent if the collection methodology and purpose are not clear. Gathering intelligence for decision making requires users to be keen observers and not take the information provided at face value alone. These probing questions will allow you to make sure that you're working with clean and accurate data devoid of any influence or manipulation. Only then can you be sure of deriving true business intelligence for your organization. BI technology is also a great way to ensure the accuracy of reports. The SQL BI Platform provides advanced tools and techniques for all your BI needs and concerns. Koenig Solutions offers this course along with a host of other Business Intelligence and IT courses on all the latest technologies available in the market today.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • apt-get install is not able to access /etc

    - by HorusKol
    I put together an Ubuntu 12.04 server a couple of weeks ago and everything seemed fine until this morning. Suddenly, I'm having trouble installing new packages. At first I thought there was something wrong with tinyproxy, so I tried installing squid instead. However, I get similar results:

        Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
        ...
        /var/lib/dpkg/info/squid3.postinst: 1: /var/lib/dpkg/info/squid3.postinst: cannot open /etc/squid3/squid.conf: No such file

    It seems that apt-get is not creating the configuration files needed for these programs. I haven't modified any configuration or user groups since the last successful update/install of packages.
    /etc is present and is populated with a nice healthy tree of configuration files. It is owned and grouped to root, and has the permissions drwxr-xr-x. All the files and folders inside seem to be fine too, as far as I can tell. I've even been able to edit and save a couple with sudo.
    Full output from installing tinyproxy:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          tinyproxy
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/61.6 kB of archives.
        After this operation, 201 kB of additional disk space will be used.
        Selecting previously unselected package tinyproxy.
        (Reading database ... 58916 files and directories currently installed.)
        Unpacking tinyproxy (from .../tinyproxy_1.8.3-1_amd64.deb) ...
        Processing triggers for ureadahead ...
        Processing triggers for man-db ...
        Setting up tinyproxy (1.8.3-1) ...
        Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
        invoke-rc.d: initscript tinyproxy, action "start" failed.
        dpkg: error processing tinyproxy (--configure):
         subprocess installed post-installation script returned error exit status 70
        Errors were encountered while processing:
         tinyproxy
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Result of strace after installation:

        18467 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
        18467 open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
        18467 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\30\2\0\0\0\0\0"..., 832) = 832
        18467 open("/etc/tinyproxy.conf", O_RDONLY) = -1 ENOENT (No such file or directory)

    Read the article

  • Internal Mutation of Persistent Data Structures

    - by Greg Ros
    To clarify, when I use the terms persistent and immutable for a data structure, I mean that:
    1. The state of the data structure remains unchanged for its lifetime. It always holds the same data, and the same operations always produce the same results.
    2. The data structure allows Add, Remove, and similar methods that return new objects of its kind, modified as instructed, that may or may not share some of the data of the original object.
    However, while a data structure may seem persistent to the user, it may do other things under the hood. To be sure, all data structures are, internally, at least somewhere, based on mutable storage. If I were to base a persistent vector on an array, and copy it whenever Add is invoked, it would still be persistent, as long as I modify only locally created arrays.
    However, sometimes you can greatly increase performance by mutating a data structure under the hood. In more insidious, dangerous, and destructive ways, so to speak. Ways that might leave the abstraction untouched, not letting the user know anything has changed about the data structure, but that are critical at the implementation level.
    For example, let's say we have a class called ArrayVector implemented using an array. Whenever you invoke Add, you get an ArrayVector built on top of a newly allocated array that has one additional item. A sequence of n such updates will involve n array copies and allocations.
    However, let's say we implement a lazy mechanism that stores all sorts of updates (such as Add, Set, and others) in a queue. In this case, each update requires constant time (adding an item to a queue), and no array allocation is involved. When a user tries to get an item from the array, all the queued modifications are applied under the hood, requiring a single array allocation and copy (since we know exactly what data the final array will hold, and how big it will be). Future get operations will find an empty queue, so each will take a single operation. But in order to implement this, we need to "switch", or mutate, the internal array to the new one and empty the queue, which is a very dangerous action.
    However, considering that in many circumstances (most updates are going to occur in sequence, after all) this can save a lot of time and memory, it might be worth it; you will need to ensure exclusive access to the internal state, of course.
    This isn't a question about the efficacy of such a data structure. It's a more general question: is it ever acceptable to mutate the internal state of a supposedly persistent or immutable object in destructive and dangerous ways? Does performance justify it? Would you still be able to call it immutable? Oh, and could you implement this sort of laziness without mutating the data structure in the specified fashion?
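    To make the trade-off concrete, here is a minimal C# sketch of the lazy idea described above. All names (LazyVector and its members) are illustrative inventions, not from the original post, and synchronization is reduced to a single per-instance lock, so treat it as a sketch rather than a production implementation. Instead of a literal queue object, each instance records one pending update plus a reference to its parent; that chain plays the role of the queue and is flattened, destructively, on the first read.

        using System;
        using System.Collections.Generic;

        public sealed class LazyVector
        {
            private readonly object _gate = new object();
            private int[] _items;        // non-null once this instance is materialized
            private LazyVector _parent;  // pending-update chain; null once materialized
            private int _updateIndex;    // index written by this instance's pending update
            private int _updateValue;    // value written by this instance's pending update
            private readonly int _count;

            public static readonly LazyVector Empty = new LazyVector(new int[0]);

            private LazyVector(int[] items) { _items = items; _count = items.Length; }

            private LazyVector(LazyVector parent, int index, int value, int count)
            {
                _parent = parent;
                _updateIndex = index;
                _updateValue = value;
                _count = count;
            }

            public int Count { get { return _count; } }

            // O(1): records the update without touching any array.
            public LazyVector Add(int value)
            {
                return new LazyVector(this, _count, value, _count + 1);
            }

            public LazyVector Set(int index, int value)
            {
                if (index < 0 || index >= _count) throw new ArgumentOutOfRangeException("index");
                return new LazyVector(this, index, value, _count);
            }

            public int Get(int index)
            {
                if (index < 0 || index >= _count) throw new ArgumentOutOfRangeException("index");
                return Materialize()[index];
            }

            // The dangerous part: flatten the chain of pending updates into one
            // freshly allocated array, then destructively swap it into this
            // instance. The abstraction survives because the observable contents
            // of this vector never change.
            private int[] Materialize()
            {
                lock (_gate)
                {
                    if (_items != null) return _items;

                    // Walk down to the nearest materialized ancestor. Note: a real
                    // implementation must also guard against ancestors being
                    // materialized concurrently; this sketch does not.
                    var chain = new Stack<LazyVector>();
                    LazyVector node = this;
                    while (node._items == null)
                    {
                        chain.Push(node);
                        node = node._parent;
                    }

                    var result = new int[_count];
                    Array.Copy(node._items, result, node._count);

                    // Replay the pending updates from oldest to newest.
                    while (chain.Count > 0)
                    {
                        LazyVector step = chain.Pop();
                        result[step._updateIndex] = step._updateValue;
                    }

                    _items = result;  // internal mutation
                    _parent = null;   // let the old chain be garbage collected
                    return result;
                }
            }
        }

    With this design, LazyVector.Empty.Add(1).Add(2).Set(0, 9) performs three constant-time operations, and the single array allocation happens only on the first Get.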

    Read the article

  • How to load stacking chunks on the fly?

    - by Brettetete
    I'm currently working on an infinite world, mostly inspired by Minecraft. A chunk consists of 16x16x16 blocks. A block (cube) is 1x1x1. This runs very smoothly with a ViewRange of 12 chunks (12x16) on my computer. Fine. When I change the chunk height to 256 this becomes, obviously, incredibly laggy. So what I basically want to do is stack chunks. That means my world could be [8,16,8] chunks large.
    The question now is how to generate chunks on the fly. At the moment I generate not-yet-existing chunks in a circle around my position (near to far). Since I don't stack chunks yet, this is not very complex. An important side note here: I also want to have biomes with different min/max heights. In the Flatlands biome the highest layer with blocks would be 8 (8x16); in the Mountains biome the highest layer with blocks would be 14 (14x16). Just as an example.
    What I could do would be loading one chunk above and below me, for example. But then the problem would be that transitions between different biomes could be larger than one chunk on the y axis.
    (Screenshot: my current chunk loading in action.)
    For completeness, here is my current chunk loading "algorithm":

        private IEnumerator UpdateChunks(){
            for (int i = 1; i < VIEW_RANGE; i += ChunkWidth) {
                float vr = i;
                for (float x = transform.position.x - vr; x < transform.position.x + vr; x += ChunkWidth) {
                    for (float z = transform.position.z - vr; z < transform.position.z + vr; z += ChunkWidth) {
                        _pos.Set(x, 0, z); // no y, yet
                        _pos.x = Mathf.Floor(_pos.x/ChunkWidth)*ChunkWidth;
                        _pos.z = Mathf.Floor(_pos.z/ChunkWidth)*ChunkWidth;

                        Chunk chunk = Chunk.FindChunk(_pos);

                        // If Chunk is already created, continue
                        if (chunk != null)
                            continue;

                        // Create a new Chunk..
                        chunk = (Chunk) Instantiate(ChunkFab, _pos, Quaternion.identity);
                    }
                }

                // Skip to next frame
                yield return 0;
            }
        }
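    One way to extend the loop above to stacked chunks is an inner y loop whose upper bound comes from the biome at (x, z), so a transition larger than one chunk on y is handled naturally: each column simply generates as many vertical chunks as its biome needs. This is only a sketch; ChunkHeight and GetMaxChunkLayer are assumed names that do not exist in the snippet above.

        // Sketch: the same circular scan, with a vertical loop per column.
        // ChunkHeight and GetMaxChunkLayer(x, z) are hypothetical; the latter
        // would ask the biome at (x, z) for its highest chunk layer (e.g. 8 in
        // Flatlands, 14 in Mountains).
        for (float x = transform.position.x - vr; x < transform.position.x + vr; x += ChunkWidth) {
            for (float z = transform.position.z - vr; z < transform.position.z + vr; z += ChunkWidth) {
                float maxY = GetMaxChunkLayer(x, z) * ChunkHeight; // biome-dependent cap
                for (float y = 0; y <= maxY; y += ChunkHeight) {
                    _pos.Set(x, y, z);
                    _pos.x = Mathf.Floor(_pos.x / ChunkWidth) * ChunkWidth;
                    _pos.y = Mathf.Floor(_pos.y / ChunkHeight) * ChunkHeight;
                    _pos.z = Mathf.Floor(_pos.z / ChunkWidth) * ChunkWidth;

                    if (Chunk.FindChunk(_pos) != null) continue;
                    Instantiate(ChunkFab, _pos, Quaternion.identity);
                }
            }
        }

    Because the cap is queried per column rather than per chunk, a Flatlands column next to a Mountains column just gets fewer chunks, with no special transition handling.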

    Read the article

< Previous Page | 458 459 460 461 462 463 464 465 466 467 468 469  | Next Page >