Search Results

Search found 21963 results on 879 pages for 'lance may'.


  • How To Enlarge a Virtual Machine’s Disk in VirtualBox or VMware

    - by Chris Hoffman
    When you create a virtual hard disk in VirtualBox or VMware, you specify a maximum disk size. If you want more space on your virtual machine’s hard disk later, you’ll have to enlarge the virtual hard disk and partition. Note that you may want to back up your virtual hard disk file before performing these operations – there’s always a chance something can go wrong, so it’s always good to have backups. However, the process worked fine for us. Image Credit: flickrsven
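    As a concrete reference, the command-line equivalents look roughly like this (the paths and sizes below are made-up examples; in newer VirtualBox releases the command is named modifymedium rather than modifyhd, and both commands should be run with the VM powered off):

        VBoxManage modifyhd "C:\Users\You\VirtualBox VMs\WinXP\WinXP.vdi" --resize 51200

        vmware-vdiskmanager -x 50GB "C:\Virtual Machines\WinXP\WinXP.vmdk"

    The VirtualBox --resize value is in megabytes, so 51200 grows the disk to 50 GB; the VMware command does the same for a VMDK. In both cases the partition inside the guest still has to be extended afterwards, which is the second half of the process described above.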

    Read the article

  • assigning values to shader parameters in the XNA content pipeline

    - by Nick
    I have tried creating a simple content processor that assigns the custom effect I created to models instead of the default BasicEffect.

        [ContentProcessor(DisplayName = "Shadow Mapping Model")]
        public class ShadowMappingModelProcessor : ModelProcessor
        {
            protected override MaterialContent ConvertMaterial(MaterialContent material, ContentProcessorContext context)
            {
                EffectMaterialContent shadowMappingMaterial = new EffectMaterialContent();
                shadowMappingMaterial.Effect = new ExternalReference<EffectContent>("Effects/MultipassShadowMapping.fx");
                return context.Convert<MaterialContent, MaterialContent>(shadowMappingMaterial, typeof(MaterialProcessor).Name);
            }
        }

    This works, but when I go to draw a model in a game, the effect has no material properties assigned. How would I go about assigning, say, my DiffuseColor or SpecularColor shader parameter to white, or (better) can I assign it to some value specified by the artist in the model? (I think this may have something to do with the OpaqueDataDictionary, but I am confused about how to use it – the content pipeline has always been a black box to me.)
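    A hedged sketch of the OpaqueData route: values added to an EffectMaterialContent’s OpaqueData dictionary are applied to effect parameters with matching names when the material is processed, so artist-authored values can be copied across from the source material. The parameter names below are only examples and must match what the .fx file actually declares:

        protected override MaterialContent ConvertMaterial(MaterialContent material, ContentProcessorContext context)
        {
            EffectMaterialContent shadowMappingMaterial = new EffectMaterialContent();
            shadowMappingMaterial.Effect = new ExternalReference<EffectContent>("Effects/MultipassShadowMapping.fx");

            // Copy the artist-specified value from the source material if present
            // (a BasicMaterialContent stores it under the key "DiffuseColor"),
            // otherwise fall back to white.
            object diffuse;
            if (material.OpaqueData.TryGetValue("DiffuseColor", out diffuse))
                shadowMappingMaterial.OpaqueData.Add("DiffuseColor", diffuse);
            else
                shadowMappingMaterial.OpaqueData.Add("DiffuseColor", new Vector3(1, 1, 1));

            // Textures (e.g. the diffuse texture) can be carried across the same way.
            foreach (var pair in material.Textures)
                shadowMappingMaterial.Textures.Add(pair.Key, pair.Value);

            return context.Convert<MaterialContent, MaterialContent>(shadowMappingMaterial, typeof(MaterialProcessor).Name);
        }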

    Read the article

  • Roo gem .xlsx files performance problems [closed]

    - by alvaritogf
    I am getting unacceptable performance problems using the roo gem to read a file through its XLSX or XLS library. Can someone suggest an alternative way to parse an .xlsx file?

        parsed_file = Excel.new(filename, false, :ignore)  if (file_format.upcase == "XLS")
        parsed_file = Excelx.new(filename, false, :ignore) if (file_format.upcase == "XLSX")
        raise t "#{filename} is not an Excel file!" if (!parsed_file)
        parsed_file.default_sheet = parsed_file.sheets[0] #'Sheet2' #oo.sheets[1]
        first_row    = parsed_file.first_row
        last_row     = parsed_file.last_row
        first_column = parsed_file.first_column
        last_column  = parsed_file.last_column
        #logger.info "#### Total Rows:#{last_row}, first_row:#{first_row}, last_row:#{last_row}, first_column:#{first_column}, last_column:#{last_column}"
        first_row.upto(last_row) do |current_line|
          # Stuff ....
        end

    Thanks

    Read the article

  • ct.sym steals the ASM class

    - by Geertjan
    Some mild consternation on the Twittersphere yesterday. Marcus Lagergren not being able to find the ASM classes in JDK 8 in NetBeans IDE: And there's no such problem in Eclipse (and apparently in IntelliJ IDEA). Help, does NetBeans (despite being incredibly awesome) suck, after all?

    The truth of the matter is that there's something called "ct.sym" in the JDK. When javac is compiling code, it doesn't link against rt.jar. Instead, it uses a special symbol file lib/ct.sym with class stubs. Internal JDK classes are not put in that symbol file, since those are internal classes. You shouldn't want to use them at all. However, what if you're Marcus Lagergren, who DOES need these classes? I.e., he's working on the internal JDK classes and hence needs to have access to them. Fair enough that the general Java population can't access those classes, since they're internal implementation classes that could be changed at any time, and one wouldn't want all the unknown clients of those classes to start breaking once changes are made to the implementation – i.e., this is rt.jar's internal class protection mechanism. But, again, we're now Marcus Lagergren and not the general Java population.

    For the solution, read Jan Lahoda, NetBeans Java Editor guru, here: https://netbeans.org/bugzilla/show_bug.cgi?id=186120 In particular, take note of this:

    AFAIK, the ct.sym is new in JDK6. It contains stubs for all classes that existed in JDK5 (for compatibility with existing programs that would use private JDK classes), but does not contain implementation classes that were introduced in JDK6 (only API classes). This is to prevent application developers to accidentally use JDK's private classes (as such applications would be unportable and may not run on future versions of JDK). Note that this is not really a NB thing - this is the behavior of javac from the JDK. I do not know about any way to disable this except deleting ct.sym or the option mentioned above. Regarding loading the classes: JVM uses two classpath's: classpath and bootclasspath. rt.jar is on the bootclasspath and has precedence over anything on the "custom" classpath, which is used by the application. The usual way to override classes on bootclasspath is to start the JVM with "-Xbootclasspath/p:" option, which prepends the given jars (and presumably also directories) to bootclasspath.

    Hence, let's take the first option, the simpler one, and simply delete the "ct.sym" file. Again, only because we need to work with those internal classes as developers of the JDK, not because we want to hack our way around "ct.sym", which would mean you'd not have portable code at the end of the day. Go to the JDK 8 lib folder and you'll find the file: Delete it. Start NetBeans IDE again, either on JDK 7 or JDK 8 – it doesn't make a difference for these purposes – create a new Java application (or use an existing one), make sure you have set the JDK above as the JDK of the application, and hey presto: The above obviously assumes you have a build of JDK 8 that actually includes the ASM package. And below you can see that not only are the classes found but my build succeeded, even though I'm using internal JDK classes. The yellow markings in the sidebar mean that the classes are imported but not used in the code, whereas normally, if I hadn't removed "ct.sym", I would have seen red error markings instead, and the code wouldn't have compiled. Note: I've tried setting "-XDignore.symbol.file" in "netbeans.conf" and in other places, but so far haven't got that to work.

    Simply deleting the "ct.sym" file (or backing it up somewhere and putting it back when needed) is quite clearly the most straightforward solution. Ultimately, if you want to be able to use those internal classes while still having portable code, do you know what you need to do? You need to create a JDK bug report stating that you need an internal class to be added to "ct.sym". Probably you'll get a motivation back stating WHY that internal class isn't supposed to be used externally. There must be a reason why those classes aren't available for external usage, otherwise they would have been added to "ct.sym".

    So, now the only remaining question is why the Eclipse compiler doesn't hide the internal JDK classes. Apparently the Eclipse compiler ignores the "ct.sym" file. In other words, at the end of the day, far from being a bug in NetBeans... we have now found a (pretty enormous, I reckon) bug in Eclipse. The Eclipse compiler does not protect you from using internal JDK classes, and the code that you create in Eclipse may not work with future releases of the JDK, since the JDK team is simply going to be changing those classes that are not found in the "ct.sym" file while assuming (correctly, thanks to the presence of the "ct.sym" mechanism) that no code in the world, other than JDK code, is tied to those classes.
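    As a footnote, the javac flag mentioned above does work from a plain command line; assuming a source file that imports the internal ASM classes, compiling against rt.jar instead of ct.sym looks like this (the file name is just an example):

        javac -XDignore.symbol.file MyAsmTool.java

    Getting that same flag through netbeans.conf and the IDE's build infrastructure is the part that, as noted, hasn't worked so far, which is why deleting ct.sym remains the pragmatic route.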

    Read the article

  • Concrete examples of Python's "only one way to do it" maxim

    - by Charles Roper
    I am learning Python and am intrigued by the following point in PEP 20, The Zen of Python: There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Could anyone offer any concrete examples of this maxim? I am particularly interested in the contrast to other languages such as Ruby. Part of the Ruby design philosophy (originating with Perl, I think?) is that having multiple ways of doing it is A Good Thing. Can anyone offer some examples showing the pros and cons of each approach? Note: I'm not after an answer to which is better (which is probably too subjective to ever be answered), but rather an unbiased comparison of the two styles.

    Read the article

  • Predicting advantages of database denormalization

    - by Janus Troelsen
    I was always taught to strive for the highest Normal Form of database normalization, and we were taught Bernstein's Synthesis algorithm to achieve 3NF. This is all very well, and it feels nice to normalize your database, knowing that fields can be modified while retaining consistency. However, performance may suffer. That's why I am wondering whether there is any way to predict the speedup/slowdown when denormalizing. That way, you can build your list of FDs featuring 3NF and then denormalize as little as possible. I imagine that denormalizing too much would waste space and time, because e.g. giant blobs are duplicated, or it becomes harder to maintain consistency because you have to update multiple fields using a transaction. Summary: given a 3NF FD set and a set of queries, how do I predict the speedup/slowdown of denormalization? Links to papers appreciated too.

    Read the article

  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
    On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council, on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company. Details on the events are here and here.

    We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product will be our discussion of whether the usual characterization of Excel as the nemesis of IT, the guilty pleasure of business users and the antithesis of formal BI is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel, and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed.

    One of the keys to this is using Excel as a data consumer rather than a data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria -- it stores a read-only copy of your data in the form of a model, and once a workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress).

    The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you've just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while that can help: Web Queries.

    The Web Query feature was introduced, ostensibly, to allow Excel to pull data in from Internet Web pages…for example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table. To use the feature in Excel 2007 or 2010, click the Data tab of the ribbon and click the “From Web” button towards the left; in older versions use the corresponding option in the menu or toolbars. Next, paste a URL into the resulting dialog box and tap Enter or click the Go button. A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you’d like to import. Here’s an example: Now just click the table, click the Import button, and the Import Data dialog appears. You can simply click OK to bring in your data, or you can first click the Properties… button and configure the data import to be refreshed at an interval in minutes that you select. Now your data’s in the spreadsheet and ready to be worked with: Your data may be vulnerable to modification, but if you’ve set up the data refresh, any accidental or malicious changes will be corrected in time anyway.

    The thing about this feature is that it’s most useful not for public Web pages, but for pages behind the firewall. In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that’s “published” from an application. Users just need a URL. They don’t need to know server and database names, and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security. If that’s not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data.

    The only requirement is that the data must be provided in an HTML table, with the first row providing the column names. From an ASP.NET project, it couldn’t be easier: a simple bound GridView control is totally compatible. Use a data source control with it, and you don’t even have to write any code. Users can link to pages that are part of an application’s UI, or developers can create pages that are specially designed for the purpose of providing an interface to the Web Query import feature. And none of this is Microsoft- or .NET-specific. You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query. Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards.

    This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an “Astoria” WCF Data Service. And while it’s cool that PowerPivot and Excel 2010 can import such OData feeds, it’s good to know that older versions of Excel can function in a similar fashion, and can consume data produced by virtually any Web development platform.

    As a final matter, instead of just telling you that “older versions” of Excel support this feature, I’ll be more specific. To discover which version of Excel first supported Web Queries, go to http://bit.ly/OldSchoolXL.
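    To sketch how little server-side work this needs (the connection string name, view, and page below are all hypothetical), an ASP.NET page that feeds the Web Query feature can be as simple as a bound GridView, since GridView renders a plain HTML table whose first row holds the column names:

        <%@ Page Language="C#" %>
        <html>
        <body>
            <form id="form1" runat="server">
                <%-- Renders an HTML table with a header row: exactly the shape
                     Excel's Web Query import expects to find. --%>
                <asp:GridView ID="SalesGrid" runat="server" DataSourceID="SalesSource" />
                <asp:SqlDataSource ID="SalesSource" runat="server"
                    ConnectionString="<%$ ConnectionStrings:Warehouse %>"
                    SelectCommand="SELECT Region, FiscalMonth, SalesAmount FROM vSalesSummary" />
            </form>
        </body>
        </html>

    Point a Web Query at that page's URL and you're done – no code-behind required.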

    Read the article

  • Chrome Apps Office Hours: Networking APIs

    Chrome Apps Office Hours: Networking APIs Ask and vote for questions at goo.gl Writing a web server in a web browser is a little meta, but it's one of the many new scenarios unlocked by the new APIs available in Chrome Apps! Join Paul Kinlan and Pete LePage as they explore the networking APIs available to Chrome Apps. They'll show you how you can write your own web server and connect to other devices via the network. While the web server in a browser may not be a common scenario, accessing the TCP and UDP stack is extremely powerful and will allow your apps to connect to other hardware or services like never before.

    Read the article

  • Yes, you can benefit from both data and backup compression

    - by AaronBertrand
    Earlier today, MSSQLTips posted a backup compression tip by Thomas LaRock ( blog | twitter ). In that article, Tom states: "If you are already compressing data then you will not see much benefit from backup compression." I don't want to argue with a rock star, and I will concede that he may be right in some scenarios. Nonetheless, I tweeted that "it depends;" Thomas then asked for "an example where you have data comp and you also see a large benefit from backup comp?" My initial reaction came about...(read more)

    Read the article

  • View Docs and PDFs Directly in Google Chrome

    - by Matthew Guay
    Would you like to view documents, presentations, and PDFs directly in Google Chrome? Here’s a handy extension that makes Google Docs your default online viewer so you don’t have to download the file first.

    Getting Started
    By default, when you come across a PDF or other common document file online in Google Chrome, you’ll have to download the file and open it in a separate application. It’d be much easier to simply view online documents directly in Chrome. To do this, head over to the Docs PDF/PowerPoint Viewer page on the Chrome Extensions site (link below), and click Install to add it to your browser. Click Install to confirm that you want to install this extension. Extensions don’t run by default in Incognito mode, so if you’d like to always view documents directly in Chrome, open the Extensions page and check Allow this extension to run in incognito. Now, when you click a link for a document online, such as a .docx file from Word, it will open in the Google Docs viewer. These documents usually render in their original full quality. You can zoom in and out to see exactly what you want, or search within the document. Or, if it doesn’t look correct, you can click the Download link in the top left to save the original document to your computer and open it in Office. Even complex PDFs render very nicely. Do note that Docs will keep downloading the document as you’re reading it, so if you jump to the middle of a document it may look blurry at first but will quickly clear up. You can even view famous presentations online without opening them in PowerPoint. Note that this will only display the slides themselves, but if you’re looking for information you likely don’t need the slideshow effects anyway.

    Adobe Reader Conflicts
    If you already have Adobe Acrobat or Adobe Reader installed on your computer, PDF files may open with the Adobe plugin. If you’d prefer to read your PDFs with the Docs PDF Viewer, then you need to disable the Adobe plugin. Enter the following in your Address Bar to open your Chrome Plugins page: chrome://plugins/ and then click Disable underneath the Adobe Acrobat plugin. Now your PDFs will always open with the Docs viewer instead.

    Performance
    Who hasn’t been frustrated by clicking a link to a PDF file, only to have your browser pause for several minutes while Adobe Reader struggles to download and display the file? Google Chrome’s default behavior of simply downloading the files and letting you open them is hardly more helpful. This extension takes away both of these problems, since it renders the documents on Google’s servers. Most documents opened fairly quickly in our tests, and we were able to read large PDFs only seconds after clicking their link. Also, the Google Docs viewer rendered the documents much better than the HTML version in Google’s cache. Google Docs did seem to have problems with some files, and we saw error messages on several documents we tried to open. If you encounter this, click the Download link in the top left corner to download the file and view it from your desktop instead.

    Conclusion
    Google Docs has improved over the years, and now it offers fairly good rendering even of more complex documents. This extension can make your browsing easier, and help documents and PDFs feel more like part of the Internet. And, since the documents are rendered on Google’s servers, it’s often faster to preview large files than to download them to your computer.

    Link
    Download the Docs PDF/PowerPoint Viewer extension from Google

    Read the article

  • UK Connected Systems User Group - Udi Dahan Event Topic change

    - by Michael Stephenson
    Hi, just wanted to get the word out about a change to the May user group event. Udi Dahan will present a new topic which he has not presented in the UK before. Details below. To register for this event please refer to: http://ukconnectedsystemsusergroup.org/UpcomingEvents.aspx

    Title: High Availability - A Contrarian View

    Abstract: Many developers are aware of the importance of high availability, critically analyzing any single points of failure in the infrastructure. Those same developers rarely give a second thought to the periods of time when a system is being upgraded. Even if all the servers are running, most systems cannot function in-between versions. Yet with the increased pace of business, users are demanding ever more frequent releases. The poor maintenance programmers and system administrators are left holding the bag long after the architecture that sealed their fate was formulated. Join Udi for some different perspectives on high availability - architecture and methodology for the real world.

    Read the article

  • svn checkout through proxy doesn't work

    - by Hoghweed
    I'm on Ubuntu 11.04 x64 and I'm trying to do an svn checkout through a proxy. I edited the servers file to set the HTTP proxy information needed to establish a connection, but I'm still getting errors and the checkout is not possible. This is the error:

        svn checkout http://75.101.130.236/svn/mspdd/
        svn: OPTIONS of 'http://75.101.130.236/svn/mspdd': Could not authenticate to proxy server:
        ignored Kerberos challenge, ignored NTLM challenge, GSSAPI authentication error:
        Unspecified GSS failure. Minor code may provide more information:
        Credentials cache file '/tmp/krb5cc_1000' not found (http://75.101.130.236)

    Accessing the URL through a browser works fine, but not from the terminal. Any idea? Thanks a lot
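    For reference, the relevant block in ~/.subversion/servers (or /etc/subversion/servers system-wide) looks like this; the host, port, and credentials below are placeholders:

        [global]
        http-proxy-host = proxy.example.com
        http-proxy-port = 8080
        http-proxy-username = myuser
        http-proxy-password = mypassword

    If those settings are already correct, the "ignored Kerberos challenge, ignored NTLM challenge" part of the error suggests the proxy is insisting on NTLM/Kerberos authentication, which svn's HTTP library may not complete with a plain username and password; running a local re-authenticating proxy such as cntlm in front of it is a common workaround.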

    Read the article

  • How to Answer a Stupid Interview Question the Right Way

    - by AjarnMark
    Have you ever been asked a stupid question during an interview; one that seemed to have no relation to the job responsibilities at all? Tech people are often caught off-guard by these apparently irrelevant questions, but there is a way you can turn these to your favor. Here is one idea.

    While chatting with a couple of folks between sessions at SQLSaturday 43 last weekend, one of them expressed frustration over a seemingly ridiculous and trivial question that she was asked during an interview, and she believes it cost her the job opportunity. The question, as I remember it being described, was, “What is the largest byte measurement?”. The candidate made up a guess (“zetabyte”) during the interview, which is actually closer than she may have realized. According to Wikipedia, there is a measurement known as zettabyte which is 10^21, and the largest one listed there is yottabyte at 10^24.

    My first reaction to this question was, “That’s just a hiring manager that doesn’t really know what they’re looking for in a candidate. Furthermore, this tells me that this manager really does not understand how to build a team.” In most companies, team interaction is more important than uber-knowledge. I didn’t ask, but this could also be another geek on the team trying to establish their Alpha-Geek stature. I suppose that there are a few, very few, companies that can build their businesses on hiring only the extreme alpha-geeks, but that certainly does not represent the majority of businesses in America.

    My friend who was there suggested that the appropriate response to this silly question would be, “And how does this apply to the work I will be doing?” Of course this is an understandable response when you’re frustrated because you know you can handle the technical aspects of the job, and it seems like the interviewer is just being silly. But it is also a direct challenge, which may not be the best approach in interviewing. I do have to admit, though, that there are those folks who just won’t respect you until you do challenge them, but again, I don’t think that is the majority.

    So after some thought, here is my suggestion: “Well, I know that there are petabytes and exabytes and things even larger than that, but I haven’t been keeping up on my list of Greek prefixes that have not yet been used, so I would have to look up the exact answer if you need it. However, I have worked with databases as large as 30 Terabytes. How big are the largest databases here at X Corporation?” Perhaps with a follow-up of, “Typically, what I have seen in companies that have databases of your size, is that the three biggest challenges they face are: A, B, and C. What would you say are the top 3 concerns that you would like the person you hire to be able to address?…Here is how I have dealt with those concerns in the past (or ‘Here is how I would tackle those issues for you…’).”

    Wait! What just happened?! We took a seemingly irrelevant and frustrating question and turned it around into an opportunity to highlight our relevant skills and guide the conversation back in a direction more to our liking and benefit. In more generic terms, here is what we did:

    1. Admit that you don’t know the specific answer off the top of your head, but can get it if it’s truly important to the company. Maybe for some reason it really is important to them.
    2. Mention something similar or related that you do know, reassuring them that you do have some knowledge in that subject area.
    3. Draw a parallel to your past work experience.
    4. Ask follow-up questions about the company’s specific needs and discuss how you can fulfill those.

    This type of thing requires practice and some forethought. I didn’t come up with this answer until a day later, which is too late when you’re interviewing. I still think it is silly for an interviewer to ask something like that, but at least this is one way to spin it to your advantage while you consider whether you really want to work for someone who would ask a thing like that. Remember, interviewing is a two-way process. You’re deciding whether you want to work there just as much as they are deciding whether they want you.

    There is always the possibility that this was a calculated maneuver on the part of the hiring manager just to see how quickly you think on your feet and how you handle stupid questions. Maybe he knows something about the work environment and he’s trying to gauge whether you’ll actually fit in okay. And if that’s the case, then the above response still works quite well.

    Read the article

  • component Initialization in component-based game architectures

    - by liortal
    I'm developing a 2D game (in XNA) and I've gone slightly towards a component-based approach, where I have a main game object (container) that holds different components. When implementing the needed functionality as components, I'm now faced with an issue – who should initialize components? Are components usually passed into an entity already initialized, or does some other entity initialize them? In my current design I have an issue where a component, when created, requires knowledge of the attached entity, but these two events (component construction and attaching to a game entity) may not happen at the same time. I am looking for a standard approach, or examples of implementations that work, that overcome this issue or present a clear way to resolve it.
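    One common way out (sketched below with hypothetical Entity/Component types, not a prescription) is two-phase initialization: the constructor takes only what exists at construction time, and entity-dependent setup is deferred to a hook the entity calls at attach time, so the two events are allowed to happen separately:

        using System.Collections.Generic;

        public abstract class Component
        {
            public Entity Owner { get; private set; }

            // Called by Entity.AddComponent once the attachment actually exists.
            public void Attach(Entity owner)
            {
                Owner = owner;
                OnAttached();   // entity-dependent initialization goes here, not in the constructor
            }

            protected virtual void OnAttached() { }
        }

        public class Entity
        {
            private readonly List<Component> components = new List<Component>();

            public void AddComponent(Component component)
            {
                components.Add(component);
                component.Attach(this);   // construction and attachment no longer need to coincide
            }
        }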

    Read the article

  • Turbo C++ to Visual Studio 2010 migration [closed]

    - by BigGenius
    OK, based on my previous questions and your help, I have gone and installed Visual Studio Express. But now the problem is that the programs which I successfully code at home in Visual Studio don't run on the Turbo C++ compiler at school (assuming I type the program in again instead of exporting the code). Is there anything I can do? Also, I am just learning basic syntax and data handling, loops, structures, arrays and so on. Visual Studio has auto-completion and pretty printing (which may be advantageous), but that is crap for a beginner getting a hold on the language. Sorry if I have been unclear. But what should I do? This will make me a lazy programmer and will reflect in my grades. Is there any other IDE I can use that is very similar to Turbo C++ and able to run on Windows 7 in fullscreen mode?

    Read the article

  • Implementing a "state-machine" logic for methods required by an object in C++

    - by user827992
    What I have: one hypothetical object/class, plus other classes and related methods that give me functionality.

    What I want: to link this object to 0 to N methods in realtime, on request, when an event is triggered. Each event is related to a single method or a class, so a single event does not necessarily mean "connect this 1 method only" but can also mean "connect all the methods from that class or a group of methods". I want to avoid linked lists, because I would have to browse the entire list to know which methods are linked, because a list does not ensure that the linked methods are kept in a particular order (let's say an alphabetical order by their names or classes), and also because this involves a massive amount of pointer usage.

    Example: I have an object Employee Jon. Jon acquires knowledge and forgets things pretty easily, so his skills may vary over a period of time. I'm responsible for what Jon can add to or remove from his CV. How can I implement this logic?

    Read the article

  • Demystified - BI in SharePoint 2010

    - by Sahil Malik
    Frequently, my clients ask me if there is a good guide to deciphering the seemingly daunting choice of products from Microsoft when it comes to business intelligence offerings in a SharePoint 2010 world. These are all described in detail in my book, but here is a one (well, maybe two) page executive overview.

    Microsoft Excel: Yes, Microsoft Excel! Your favorite database, and the most commonly used in the world. No, it isn’t a database by pure technical definitions, but this is the most commonly used ‘database’ in the world. You will find many business users craft up very compelling Excel sheets with tonnes of logic inside them. Good for: quick ad-hoc reports; Excel 64-bit allows the possibility of very large datasheets (also see 32-bit vs 64-bit Office, and the PowerPivot add-in below). Audience: end business users can build such solutions. Related technologies: PowerPivot, Excel Services.

    Microsoft Excel with PowerPivot Add-In: The PowerPivot add-in is an extension to Excel that adds support for large-scale data. Think of this as Excel with the ability to deal with very large amounts of data. It has an in-memory data store as an option for Analysis Services. Good for: ad-hoc reporting and logic with very large amounts of data. Audience: end business users can build such solutions. Related technologies: Excel, and Excel Services.

    Excel Services: Excel Services is a Microsoft SharePoint Server 2010 shared service that brings the power of Excel to SharePoint Server by providing server-side calculation and browser-based rendering of Excel workbooks. Thus, Excel sheets can be created by end users and published to the SharePoint server, where they are rendered right through the browser in read-only or parameterized read-only modes. They can also be accessed by other software via SOAP- or REST-based APIs. Good for: sharing Excel sheets with a larger number of people while maintaining control/version control etc.; sharing logic embedded in Excel sheets with other software across the organization via REST/SOAP interfaces. Audience: end business users can build such solutions once your tech staff has set up Excel Services on a SharePoint server instance; programmers can write software consuming functionality/complex formulae contained in your sheets. Related technologies: PerformancePoint Services, Excel, and PowerPivot.

    Visio Services: Visio Services is a shared service on the Microsoft SharePoint Server 2010 platform that allows users to share and view Visio diagrams that may or may not have data connected to them. Connected data can update these diagrams, allowing a visual/graphical view into the data. The diagrams are viewable through the browser. They are rendered in Silverlight, but will automatically down-convert to .png format. Good for: showing data as diagrams, live updating; comes with a developer story. Audience: end business users can build such solutions once your tech staff has set up Visio Services on a SharePoint server instance; developers can enhance the visualizations. Related technologies: Visio Services can be used to render workflow visualizations in SP2010.

    Reporting Services: SQL Server Reporting Services can integrate with SharePoint, allowing you to store reports and data sources in SharePoint document libraries and render these reports, and associated functionality such as subscriptions, through a SharePoint site. In SharePoint 2010, you can also write reports against SharePoint lists (Access Services uses this technique). Good for: showing complex reports running in an industry-standard data store, such as SQL Server. Audience: this is definitely developer land; don’t expect end users to craft up reports, unless a report model has previously been published. Related technologies: PerformancePoint Services.

    PerformancePoint Services: PerformancePoint Services in SharePoint 2010 is now fully integrated with SharePoint, and comes with features that can either be used in the BI center site definition, or on their own as activated features in existing site collections. PerformancePoint Services allows you to build reports and dashboards that target a variety of back-end data sources including: SQL Server Reporting Services, SQL Server Analysis Services, SharePoint lists, Excel Services, simple tables, etc. Using these you have the ability to create dashboards, scorecards/KPIs, and simple reports. You can also create reports targeting hierarchical multidimensional data sources. The visual decomposition tree is a new report type that lets you quickly break down multi-dimensional data. Good for: mostly everything :), except your wallet – it’s not free! But this is the most comprehensive offering. If you have SharePoint Server, forget everything and go with PerformancePoint. Audience: developers need to set up the back-end sources and the manageability story; DBAs need to set up data warehouses with cubes; moderately sophisticated business users, or developers, can craft up reports using Dashboard Designer, which is a click-once app that deploys with PerformancePoint. Related technologies: Excel Services, Reporting Services, etc.

    Other relevant technologies to know about:

    Business Connectivity Services: Allows for consumption of external data in SharePoint as columns or external lists. This can be paired with one or more of the above BI offerings, allowing insight into such data.

    Access Services: Allows the representation/publishing of an Access database as a SharePoint 2010 site, leveraging many SharePoint features. Reporting Services is used by Access Services.

    Secure Store Service: The SP2010 Secure Store Service is a replacement for the SP2007 single sign-on feature. This acts as a credential policeman, providing credentials to various applications running with SharePoint. BCS, PerformancePoint Services, Excel Services, and many other apps use the SSS (Secure Store Service) for credential control.

    Read the article

  • Upgrade problem during Ubuntu 11.10

    - by pankaj singh
    During the upgrade it showed a memory problem at the last moment, but I restarted my laptop in a hurry, and now it gives this type of message. What can I do?

        * Starting automatic crash report generation        [fail]
        PulseAudio configured for per-user sessions
        saned disabled; edit /etc/default/saned
        * Stopping save kernel messages                     [ OK ]

    After that:

        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service S90binfmt-support start
        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the start(8) utility, e.g. start S90binfmt-support
        start: Unknown job: S90binfmt-support
        * Stopping anac(h)ronistic cron                     [ OK ]
        * Checking battery state...                         [ OK ]
        * Stopping System V runlevel compatibility          [ OK ]

    Read the article

  • Welcome to Gotham High [Video]

    - by Asian Angel
    Goodbye Metropolis, hello insane asylum. That is the state of life for young Harley Quinn now that she has moved to Gotham. With only two high schools to choose between, her parents have decided to send her to Gotham High where life is anything but dull! Note: Video contains some language that may be considered inappropriate. Gotham High (2013) Dark Knight Batman PARODY! [via Neatorama]

    Read the article

  • Why use Flash as content headings? Why not?

    - by Itai
    I've noticed a large number of sites lately which use Flash to replace what would be simple HTML headings (h1, h2, h3). Some of them do it consistently for every header. Why would you do this? What are the pros and cons of this? It seems strange to me and really inefficient, particularly since a page can have dozens of headers. I notice this because I use Flash Block (which temporarily disables Flash), so not everyone may have seen it. Just today I landed here with 4 Flash headers. I've seen dozens of such uses of Flash this week alone, and I am wondering if this has some benefits.

    Read the article

  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up a previous article about timeouts and how they can affect your application, I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results.

    The Sample
    We begin by consuming exactly the same web service, which is sitting on a remote server. This time I have created a .NET 3.5 application which will consume the web service using the basicHttpBinding. To show you the configuration for the consumption of this web service please refer to the below diagram. You can see that, like before, we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods) and have the below code sample in the application, which will asynchronously make the web service calls and handle the responses on a callback method invoked by a delegate. If you have read the previous article you will notice that the code is almost the same.

    Test 1 – WCF with Default Timeouts
    In this test I set about recreating the same scenario as previously, but this time using WCF as the messaging component. For the first test I used the default configuration settings which WCF had set up when we added a reference to the web service. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"

    The Test: We simulated 21 calls to the web service. Test Results: The client-side trace is as follows: The server-side trace is as follows: Some observations on the results are as follows: The timeouts happened more quickly than in the previous tests because some calls were timing out before they attempted to connect to the server. The first few calls that timed out did actually connect to the server and did execute successfully on the server.

    Test 2 – Increase Open Connection Timeout & Send Timeout
    In this test I wanted to increase both the send and open timeout values to try and give everything a chance to go through. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"

    The Test: We simulated 21 calls to the web service. Test Results: The client-side trace for this test was: The server-side trace for this test was: Some observations on this test are: This test proved that if the timeouts are high enough everything will just go through.

    Test 3 – Increase Just the Send Timeout
    In this test we wanted to increase just the send timeout. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"

    The Test: We simulated 21 calls to the web service. Test Results: The below is the client-side trace: The below is the server-side trace: Some observations on this test are: In this test, from both the client and server perspective, everything ran through fine. The open connection timeout did not seem to have any effect.

    Test 4 – Increase Just the Open Connection Timeout
    In this test I wanted to validate the change to the open connection setting by increasing just this on its own. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"

    The Test: We simulated 21 calls to the web service. Test Results: The client-side trace was: The server-side trace was: Some observations on this test are: In this test you can see that the open connection timeout increase, which relates to opening the channel, was not the thing which stopped the calls timing out. It is the sending of data which is timing out. On the server you can see that the few successful calls were fine, but there were also a few calls which hit the server but timed out on the client. You can see that not all calls hit the server, which was one of the problems with the WSE and ASMX options.

    Test 5 – Smaller Increase in Send Timeout
    In this test I wanted to make a smaller increase to the send timeout than previously, just to prove that it was the key setting which was controlling what was timing out. The timeout values for this test are: openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:02:30"

    The Test: We simulated 21 calls to the web service. Test Results: The client-side trace was: The server-side trace was: Some observations on this test are: You can see that most of the calls got through fine. On the client you can see that call 20 timed out but still hit the server and executed fine.

    Summary
    At this point, between the two articles, we have quite a lot of scenarios showing the different ways the timeout settings played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks:

    ASMX: The timeout value only applies to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client side to get a connection.

    WSE: The timeout value includes both the time to obtain a connection and also the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40 second timeout setting may not throw the error until 60 seconds, when the connection to the server is made. If the connection to the server is made you should be aware that your message will be processed, and you should design for this.

    WCF: The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE, this setting's counter includes the time to get a connection as well as the time to execute on the server. Unlike WSE and ASMX, an error will be thrown as soon as the send timeout has elapsed after making your call from user code, regardless of whether we are waiting for a connection or have an open connection to the server. To a user this may appear to give better latency in getting an error response compared to WSE or ASMX.
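    As a side note, everything configured above in XML can also be set in code. A minimal sketch against a BasicHttpBinding (the contract interface ITestService stands in for whatever your service reference generates, and the address is made up):

        using System;
        using System.ServiceModel;

        public static class TimeoutDemo
        {
            public static ITestService CreateProxy()
            {
                BasicHttpBinding binding = new BasicHttpBinding();
                binding.OpenTimeout    = TimeSpan.FromMinutes(10); // time allowed to open the channel
                binding.SendTimeout    = TimeSpan.FromMinutes(10); // covers waiting for a connection AND the call itself
                binding.ReceiveTimeout = TimeSpan.FromMinutes(10);
                binding.CloseTimeout   = TimeSpan.FromMinutes(1);

                ChannelFactory<ITestService> factory = new ChannelFactory<ITestService>(
                    binding, new EndpointAddress("http://remoteserver/TestService.svc"));
                return factory.CreateChannel();
            }
        }

    SendTimeout is the property that governed the behaviour in the tests above.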

    Read the article

  • Amanda Todd – What Parents Can Learn From Her Story

    - by D'Arcy Lussier
    Amanda Todd was a bullied teenager who committed suicide this week. Her story has become headline news due in part to the YouTube video she posted telling her story:

    The story is heartbreaking for so many reasons, but I wanted to talk about what we as parents can learn from this. Being the dad to two girls, one that’s 10, I’m very aware of the dangers that the internet holds. When I saw her story, one thing jumped out at me – unmonitored internet access at an early age. My daughter (then 9) came home from a friend’s place once and asked if she could be in a YouTube video with her friend. Apparently this friend was allowed to do whatever she wanted on the internet, including posting goofy videos. This set off warning bells and we ensured our daughter realized the dangers and that she was not to ever post videos of herself. In looking at Amanda’s story, the access to unmonitored internet time, along with just being a young girl and being flattered by an online predator, were the key events that ultimately led to her suicide. Yes, the reaction of her classmates and “friends” was horrible as well; I’m not downplaying that. But our youth don’t fully understand yet that what they do on the internet today will follow them potentially forever. And the people they meet online aren’t necessarily who they claim to be. So what can we as parents learn from Amanda’s story?

    Parents Shouldn’t Feel Bad About Being Internet Police
    Our job as parents is in part to protect our kids and keep them safe, even if they don’t like our measures. This includes monitoring, supervising, and restricting their internet activities. In our house we have a family computer in the living room where the kids can watch videos and surf the web. It’s in plain view of everyone, so you can’t hide what you’re looking at. If our daughter goes to a friend’s place, we ask about what they did and what they played. If the computer comes up, we ask about what they did on it. Luckily our daughter is very up front and honest in telling us things, so we have very open discussions.

    Parents Need to Be Honest About the Dangers of the Internet
    I’m sure every generation says that “kids grow up so fast these days”, but in our case the internet really does push our kids to be exposed to things they otherwise wouldn’t experience. One wrong word in a Google search, a click of a link in a spam email, or just general curiosity can expose a child to things they aren’t ready for or should never be exposed to (and I’m not just talking about adult material – have you seen some of the graphic pictures from war zones posted on news sites recently?). Our stance as parents has been to be open about discussing the dangers with our kids before they encounter any content – be proactive instead of reactionary. Part of this is alerting them to the monsters that lurk on the internet as well. As kids explore the world wide web, they’re eventually going to encounter some chat room or some Facebook friend invite or other personal connection with someone. More than ever, kids need to be educated on the dangers of engaging with people online and sharing personal information. You can think of it as an evolved version of the discussion our parents had with us about using the phone: “Don’t say ‘I’m home alone’, don’t say when mom or dad get home, don’t tell them any information, etc.”

    Parents Need to Talk Self Worth at Home
    Katie makes the point better than I ever could (one bad word towards the end): Our children need to understand their value beyond what the latest issue of TigerBeat says, or the media who continue flaunting physical attributes over intelligence and character, or a society that puts focus on status and wealth. They also have to realize that just because someone pays you a compliment, that doesn’t mean you should ignore personal boundaries and limits. What does this have to do with the internet? Well, in days past, if you wanted to be social you had to go out somewhere. Now you can video chat with any number of people from the comfort of wherever your laptop happens to be – and not just text but full HD video with sound! While innocent children head online in the hopes of meeting cool people, predators with bad intentions are heading online too. As much as we try to monitor their online activity and be honest about the dangers of the internet, the human side of our kids isn’t something we can control. But we can try to influence them to see themselves as not needing to seek out the acceptance of complete strangers online. Way easier said than done, but ensuring self-worth is something discussed, encouraged, and celebrated is a step in the right direction.

    Parental Wake Up Call
    This post is not a critique of Amanda’s parents. The reality is that cyber bullying/abuse is happening every day, and there are millions of parents that have no clue it’s happening to their children. Amanda’s story is a wake-up call that our children’s online activities may be putting them in danger. My heart goes out to the parents of this girl. As a father of daughters, I can’t imagine what I would do if I found my daughter having to hide in a ditch to avoid a mob, or call 911 to report my daughter had attempted suicide by drinking bleach, or deal with a child turning to drugs/alcohol/cutting to cope. It would be horrendous if we as parents didn’t re-evaluate our family internet policies in light of this event. And in the end, Amanda’s video was meant to bring attention to her plight and encourage others going through the same thing. We may not be kids, but we can still honour her memory by helping safeguard our children.

    Read the article

  • Estimating time for planning and technical design using Evidence Based Scheduling

    - by Turgs
    I'm at the beginning of a development project in a large organization. The Functional Requirements are currently being worked out and documented with our business stakeholders by our Enterprise Design department. I'm required to produce Technical Design Documents and manage the team to actually build the solution. I want to try Evidence Based Scheduling, but as I understand it, part of that is breaking the job down into small tasks that are less than 14 hours in duration, which requires me to have already done the Technical Design. Therefore, can Evidence Based Scheduling only be used after the Technical Design has been done? How do you then plan and estimate the time it may take to come up with the Technical Design?

    Read the article

  • What is the best practice to move sprites using mouse order in Tile games?

    - by Robin-Hood
    I am trying to make my first tile game using XNA. I have no problem drawing the map layers using TiledLib from CodePlex, but now I want to give a sprite an order to move to a specific position on the map, by selecting the sprite (left mouse click) and then right mouse clicking somewhere on the map to specify the target position. I don’t know what the best practice is for moving a sprite this way, considering that there may be collision objects in the direct path. What is the best practice to do this? Is there any demo covering this issue? Thanks. BTW: I couldn’t upload a snapshot because of my low score :(
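    A minimal sketch of the order-and-move part in XNA (TileSize, Speed and the selectedSprite fields are assumed to exist; this walks a straight line, and for collision-aware movement you would instead feed the clicked tile into a pathfinder such as A* and walk the returned waypoints):

        // On right click: convert the mouse position to a tile, and store it as the sprite's target.
        MouseState mouse = Mouse.GetState();
        if (mouse.RightButton == ButtonState.Pressed && selectedSprite != null)
        {
            Point tile = new Point(mouse.X / TileSize, mouse.Y / TileSize);
            selectedSprite.Target = new Vector2(tile.X * TileSize, tile.Y * TileSize);
        }

        // In Update: step the selected sprite toward its target each frame.
        Vector2 toTarget = selectedSprite.Target - selectedSprite.Position;
        if (toTarget.Length() > Speed)
            selectedSprite.Position += Vector2.Normalize(toTarget) * Speed;
        else
            selectedSprite.Position = selectedSprite.Target;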

    Read the article

  • Why Does 64-Bit Windows Need a Separate “Program Files (x86)” Folder?

    - by Jason Fitzpatrick
    If you’re currently using any 64-bit version of Windows you may have noticed there are two “Program Files” folders, one for 64-bit and one for 32-bit apps. Why does Windows need to sub-divide them? Read on to see why. Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-drive grouping of Q&A web sites. Why Does 64-Bit Windows Need a Separate “Program Files (x86)” Folder? Why Your Android Phone Isn’t Getting Operating System Updates and What You Can Do About It How To Delete, Move, or Rename Locked Files in Windows

    Read the article
