Search Results

Search found 85480 results on 3420 pages for 'change data capture'.

  • Does replacing Chrome's User Data folder with my own work without leaving any trace behind? Where else does Chrome write data outside of the User Data folder?

    - by Selin Peck
    Does replacing Chrome's User Data folder with my own work without leaving any trace behind? Where else does Chrome write data outside of the User Data folder? I used to start office work by removing Chrome's User Data folder, replacing it with my own User Data copied from my external drive, and saving the original User Data to another folder. Before leaving in the evening, I take back my own User Data and restore the original User Data to where it was originally saved. Is this process advisable? Would I be safe this way, and if not, where else does Chrome save data outside of the User Data folder in AppData? Also, how does this work in Mozilla Firefox?

    Read the article

  • How do I populate multiple records of data into a PDF form like a mail-merge?

    - by user38801
    I have Acrobat Pro, and I have a PDF with a form on it. Assuming the fields in the form correspond to a data source (like rows in an RDBMS table or xml file), I want to then print multiple copies of the PDF file, with each copy having the values of a different row in the data source. It is preferable to directly interface with an actual database, rather than having to save an XML file every time I do this. If this involves programming that's cool too, I only posted here because the question didn't seem appropriate for StackOverflow. Thanks!
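    Since the question welcomes a programmatic route, here is a minimal C# sketch of the mail-merge idea using the open-source iTextSharp library together with ADO.NET. The connection string, table, and form field names are assumptions; the SetField calls must use whatever field names the PDF template actually defines.

        // Sketch: produce one filled, flattened copy of the PDF per row in a database table.
        using System.Data.SqlClient;
        using System.IO;
        using iTextSharp.text.pdf;

        class PdfMailMerge
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=Crm;Integrated Security=true"; // assumption
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    var cmd = new SqlCommand("SELECT Id, Name, Email FROM Customers", conn);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var pdfReader = new PdfReader("template.pdf");
                            var outPath = "output_" + reader["Id"] + ".pdf";
                            var stamper = new PdfStamper(pdfReader, new FileStream(outPath, FileMode.Create));

                            // Copy each column value into the matching AcroForm field.
                            stamper.AcroFields.SetField("Name", reader["Name"].ToString());
                            stamper.AcroFields.SetField("Email", reader["Email"].ToString());
                            stamper.FormFlattening = true; // bake the values in so each copy prints cleanly

                            stamper.Close();
                            pdfReader.Close();
                        }
                    }
                }
            }
        }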

    Read the article

  • Ways to improve completeness of files for data recovery and scanning?

    - by SteveO
    I am using R-Studio for data recovery on one of my NTFS partitions. There is a PDF file of about 16MB, but the software can only recover 15MB of it. So I am wondering what can be done to improve the quality of the scanning and recovery performed by the software. I am looking around its preferences, but I am not quite sure whether there are adjustable parameters for scanning and recovery that can be fine-tuned to improve the quality. R-Studio has a free demo version, for which scanning is free but recovery isn't. It is downloadable from http://www.data-recovery-software.net/Data_Recovery_Download.shtml and its manual is here: http://www.r-tt.com/downloads/Recovery_Manual.pdf. I have tried my best to search for answers in the manual, but failed to find one. Their technical support is not as good as their software, and is usually unhelpful in my opinion. Thanks!

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more!
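    As a rough idea of the serialization calls involved, here is a small sketch; it is not the article's exact demo code, and the choice of a MemoryStream plus an NVARCHAR(MAX) column for storage is an assumption about how one might keep the snapshot in a database.

        // Sketch: persist a chart's data and appearance to XML and restore it later.
        using System.IO;
        using System.Text;
        using System.Web.UI.DataVisualization.Charting;

        public static class ChartSnapshots
        {
            public static string SaveSnapshot(Chart chart)
            {
                chart.Serializer.Content = SerializationContents.All; // Data, Appearance, or All
                chart.Serializer.Format = SerializationFormat.Xml;
                using (var stream = new MemoryStream())
                {
                    chart.Serializer.Save(stream);
                    // e.g. store this string in an NVARCHAR(MAX) column as the "snapshot" record
                    return Encoding.UTF8.GetString(stream.ToArray());
                }
            }

            public static void LoadSnapshot(Chart chart, string xml)
            {
                using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(xml)))
                {
                    chart.Serializer.Load(stream); // recreates both data and appearance
                }
            }
        }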

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0 and Oracle 7.3 to other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.
    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I have written about transaction logs and recovery on my blog, and simplifying the basics is always a challenge. In the real world we always see checks and queues for a process – railway reservations, banks, customer support and so on all use lines and queues to serve everyone. The shorter the queue, the higher the efficiency of the system (in other words, the higher the concurrency). Every database implements this using mechanisms such as locking and blocking, and implements the standards in a way that facilitates higher concurrency. In this post, let us talk about concurrency and the various aspects of it that one needs to know about inside SQL Server. Let us learn the concepts as one-liners:
    Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time. The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system. Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes attempt to change the same data simultaneously. There are two approaches to managing concurrent data access: the optimistic concurrency model and the pessimistic concurrency model.
    Concurrency Models. Pessimistic concurrency is the default behavior: acquire locks to block access to data that another process is using. It assumes that enough data modification operations are in the system that any given read operation is likely to be affected by a data modification made by another user (it assumes conflicts will occur). It avoids conflicts by acquiring a lock on data being read so no other processes can modify that data, and also acquires locks on data being modified so no other processes can access the data for either reading or modifying. Readers block writers; writers block readers and writers.
    Optimistic concurrency assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying. The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs. Older versions of the data are saved, so a process reading data can see the data as it was when it started reading, unaffected by any changes being made to that data. Processes modifying the data are unaffected by processes reading the data because the reader is accessing a saved version of the data rows. Readers do not block writers and writers do not block readers, but writers can and will block writers.
    Transaction Processing. A transaction is the basic unit of work in SQL Server. A transaction consists of SQL commands that read and update the database, but the updates are not considered final until a COMMIT command is issued (at least for an explicit transaction, whose start is marked with a BEGIN TRAN and whose end is marked by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties.
    ACID Properties. Transaction processing must guarantee the consistency and recoverability of SQL Server databases, ensuring that all transactions are performed as a single unit of work regardless of hardware or system failure: A – Atomicity, C – Consistency, I – Isolation, D – Durability. Atomicity: each transaction is treated as all or nothing – it either commits or aborts. Consistency: ensures that a transaction won't allow the system to arrive at an incorrect logical state – the data must always be logically correct. Consistency is honored even in the event of a system failure. Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions. Durability: after a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on data.
    Transaction Dependencies. In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems). Lost updates: occur when two processes read the same data and both manipulate it, changing its value, and then both try to update the original data to the new value; the second process might overwrite the first update completely. Dirty reads: occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state. Non-repeatable reads: a read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing. Phantoms: occur when membership in a set changes – two SELECT operations using the same predicate in the same transaction return a different number of rows.
    Isolation Levels. SQL Server supports five isolation levels that control the behavior of read operations.
    Read uncommitted: all behaviors except for lost updates are possible. It is implemented by allowing read operations not to take any locks, and because of this, they won't be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed. When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you read a row before it is updated and an update then moves the row to a higher page number than your scan has reached. Performing an allocation order scan under read uncommitted can also cause you to miss a row completely – this can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.
    Read committed: there are two varieties of read committed isolation, optimistic and pessimistic (the default). It ensures that a read never reads data that another application hasn't committed. If another transaction is updating data and has exclusive locks on it, your transaction will have to wait for the locks to be released. Your transaction must put share locks on the data it visits, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading, but prevents them from updating. Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait: SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.
    Repeatable read: this is a pessimistic isolation level. It ensures that if a transaction revisits data or a query is reissued, the data doesn't change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no changes can be made by other transactions. However, it does allow phantom rows to appear. Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.
    Snapshot: snapshot isolation (SI) is an optimistic isolation level. It allows processes to read older versions of committed data if the current version is locked. The difference between snapshot and read committed (snapshot) has to do with how old the older versions have to be. It is possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.
    Serializable: this is the strongest of the pessimistic isolation levels. It adds to the repeatable read isolation level by ensuring that if a query is reissued, rows have not been added in the interim, i.e., phantoms do not appear. Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes. In addition, the serializable isolation level requires that you lock not only data that has been read but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to still return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock. The level gets its name from the fact that running multiple serializable transactions at the same time is equivalent to running them one at a time.
    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental building blocks that make concurrency possible. Now, if you are with me – let us continue learning with SQL Server Locking Basics.
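    As an aside, here is a minimal ADO.NET sketch of how a client application opts into one of these isolation levels; the connection string and query are placeholders, and snapshot isolation additionally requires ALLOW_SNAPSHOT_ISOLATION to be enabled on the database.

        // Sketch: run a query under snapshot isolation so the reader sees row versions
        // instead of blocking on writers' locks.
        using System.Data;
        using System.Data.SqlClient;

        class IsolationDemo
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=AdventureWorks;Integrated Security=true"; // placeholder
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    // Other options: ReadUncommitted, ReadCommitted, RepeatableRead, Serializable.
                    using (var tx = conn.BeginTransaction(IsolationLevel.Snapshot))
                    {
                        var cmd = new SqlCommand("SELECT COUNT(*) FROM Sales.SalesOrderHeader", conn, tx);
                        int count = (int)cmd.ExecuteScalar();
                        tx.Commit();
                        System.Console.WriteLine(count);
                    }
                }
            }
        }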
    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Join our webcast: Discover What’s New in Oracle Data Integrator and Oracle GoldenGate

    - by Irem Radzik
    The data integration team has organized a series of webcasts for this summer. We are kicking it off this Thursday, June 30th, at 10am PT with a product update webcast: Discover What’s New in Oracle Data Integrator and Oracle GoldenGate. In this webcast you will hear from product management about the new patch updates to both GoldenGate 11gR1 and ODI 11gR1. Jeff Pollock, Sr. Director of Product Management for ODI, will talk about the new features in Oracle Data Integrator 11.1.1.5, including the data lineage integration with OBI EE, enhanced web services to support flexible architectures, and capabilities for efficient object execution such as Load Plans. Jeff will also discuss support for complex files and performance enhancements. Chris McAllister, Sr. Director of Product Management for Oracle GoldenGate, will cover the new features of Oracle GoldenGate 11.1.1.1, such as increased data security by supporting the Oracle Database Advanced Security option, deeper integration with Oracle Database, and the expanded list of heterogeneous databases GoldenGate supports. Chris will also talk about the new Oracle GoldenGate 11gR1 release for the HP NonStop platform and will provide information on our strategic direction for product development. Join us this Thursday at 10am PT / 1pm ET to hear directly from Data Integration Product Management. You can register here for the June 30th webcast as well as for the upcoming ones in our summer webcast series.

    Read the article

  • Oracle Database Security Protecting the Oracle IRM Schema

    - by Simon Thorpe
    Acquiring the Information Rights Management technology in 2006 was part of Oracle's strategic security vision, and IRM complements the overall Oracle security set of solutions nicely. A year ago I spoke about how Oracle has solutions that can help companies protect information throughout its entire life cycle. With our acquisition of Sun this set of solutions has solidified and has even extended down to the operating system and hardware level. Oracle can now offer customers technology that protects their data from the disk, through the database, to documents on the desktop! With the recent release of Oracle IRM 11g I was tasked to configure demonstration and evaluation environments, and I thought it would make a nice story to leverage some of the security features in the latest release of the Oracle Database. After building these environments I thought I would put together a simple video demonstrating how Database Advanced Security and Information Rights Management combined can provide a very secure platform for protecting your information. Have a look at the following, which highlights these database security options: Transparent Data Encryption protecting the communication from the Oracle IRM server to the database server – encryption techniques provide confidentiality and integrity of the data passing to and from the IRM service on the back end. Transparent Data Encryption protecting the Oracle IRM database schema – encryption is used to provide confidentiality of the IRM data whilst it resides at rest in the database tablespace. Database Vault is used to ensure only the Oracle IRM service has access to query and update the information that resides in the database; this is an excellent method of ensuring that database administrators cannot look at or make changes to the Oracle IRM database whilst retaining their ability to administer the database. The last thing you want after deploying an IRM solution is for a curious or unhappy DBA to run a query that grants them rights to your company's financial data or documents pertaining to a merger or acquisition.

    Read the article

  • Methodology behind fetching large XML data sets in pieces

    - by Jerry Dodge
    I am working on an HTTP server in Delphi which simply sends back a custom XML dataset. I am not following any type of standard formatting, such as SOAP. I have the system working seamlessly, except for one small flaw: when I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP server I'm building is essentially an XML data based API around a database, implementing the common business rules – therefore, the requests are specific to the data behind the system. When, for example, I fetch a large set of product data, I would like to break this down and send it back piece by piece. However, a single HTTP request calls for a single response. I can't necessarily keep feeding the client with multiple different XML packets unless the client explicitly requests them. I don't have any session management, but rather an API key. I know if I had sessions, I could keep a dataset alive temporarily for a client, and they could request bits and pieces of it. However, without session management, I would have to execute the SQL query multiple times (once for each chunk of data), and in the meantime, if that data changes, the "pages" might get messed up, causing items to show on the wrong pages after navigating to a different page. So how is this commonly handled? What's the methodology behind breaking down a large XML dataset into chunks to lighten the load?
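    One common answer is keyset ("seek") paging: the client passes the last key it has already received with every request, and the server translates that into something like WHERE Id > @afterId ORDER BY Id with a row limit, so pages stay stable even when rows change between requests. The server in the question is Delphi, so the following client-side loop is purely illustrative (in C#); the endpoint, apiKey, afterId and pageSize parameters, and the XML shape are all assumptions.

        // Illustrative client loop for keyset paging against a hypothetical XML API.
        using System;
        using System.Linq;
        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Xml.Linq;

        class PagedFetch
        {
            static async Task Main()
            {
                using (var http = new HttpClient())
                {
                    long lastId = 0;
                    while (true)
                    {
                        // Hypothetical endpoint and parameters.
                        var url = $"http://server/api/products?apiKey=KEY&afterId={lastId}&pageSize=500";
                        var doc = XDocument.Parse(await http.GetStringAsync(url));
                        var ids = doc.Descendants("product")
                                     .Select(p => (long)p.Attribute("id"))
                                     .ToList();
                        if (ids.Count == 0) break; // no more rows: done
                        lastId = ids.Max();        // cursor for the next request
                        // ... hand the chunk to whatever consumes the data ...
                    }
                }
            }
        }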

    Read the article

  • Announcing Sesame Data Browser

    - by Fabrice Marguerie
    On the occasion of MIX10, which is currently taking place in Las Vegas, I'd like to announce Sesame Data Browser. Sesame will be a suite of tools for dealing with data, and Sesame Data Browser will be the first tool from that suite. Today, during the second MIX10 keynote, Microsoft demonstrated how they are pushing hard to get OData adopted. If you don't know about OData, you can visit the just revamped dedicated website: http://odata.org. There you'll find information about the OData protocol, which allows you to publish and consume data on the web, the OData SDK (with client libraries for .NET, Java, Javascript, PHP, iPhone, and more), a list of OData producers, and a list of OData consumers. This is where Sesame Data Browser comes into play: it's one of the tools you can use today to consume OData. I'll let you have a look, but be aware that this is just a preview and many additional features are coming soon. Sesame Data Browser is part of a bigger picture than just OData that will take shape over the coming months. Sesame is a project I've been working on for many months now, so what you see now is just a start :-) I hope you'll enjoy what you see. Let me know what you think.
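    For readers new to OData, the protocol is just HTTP plus a few query conventions, so any HTTP client can consume a feed. A tiny C# illustration follows; the public Northwind demo service on services.odata.org is used as a stand-in for any OData producer and is an assumption, not something tied to Sesame.

        // Tiny illustration of consuming an OData feed over plain HTTPS.
        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        class ODataDemo
        {
            static async Task Main()
            {
                using (var http = new HttpClient())
                {
                    http.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                    // $top is a standard OData query option limiting the number of entities returned.
                    var url = "https://services.odata.org/V4/Northwind/Northwind.svc/Customers?$top=3";
                    Console.WriteLine(await http.GetStringAsync(url));
                }
            }
        }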

    Read the article

  • TFS 2010 Server Name Change

    - by PearlFactory
    So I thought I would change the name of my machine so that the other devs can find the TFS server easily. TFS 2005 had the handy command line utility tfsadminutil for this... alas, it is now gone. Here are the steps to complete the rename. Edit the web.config, which on a default install is usually located at C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\web.config: <add key="applicationDatabase" value="Data Source=JUSTIN\SQLI01;Initial Catalog=Tfs_Configuration;Integrated Security=True;" /> The next step is to edit previous solutions/projects: 1) open the solution file, i.e. ProductApp.sln, and 2) edit the SccTeamFoundationServer URL under the Global section, changing it to the new name. If you have the DB server on the same machine, you will need to go in and remove the existing db user account assigned to the TFS DBs – remove the old [%machine_name%] value, i.e. the Tuned_Dev_PC_12\Justin user, from the above DBs. Now add the new Justin\Justin user account associated with the new machine name to the TFS and Reporting dbs... dbo or the TFSADMIN & TFSEXEC roles will do in this case (or add both). Then either reapply the user or add the new account (removing the old account, i.e. Tuned_Dev_PC_12\justin). If DB permissions are set up correctly you will get a confirmation screen at this point; if it pauses or gets stuck you need to look back at assigning the correct DB permissions to the JUSTIN\Justin user account. Also, if your project is still complaining about the old TFS name: 1) Team \ Connect to Team Foundation Server, 2) Add/Remove TFS, 3) add the new TFS name. Once you have connected to the new TFS server, reload your project from TFS... this way it removes a lot of the bugs that hang around in the local project\solution. This is similar to a VSS 2005 and older fix. Cheers (ETA about 60-90 mins, so weigh up the need vs payoff.) Shutdown, restart.

    Read the article

  • How to Change the Default Application for Android Tasks

    - by Jason Fitzpatrick
    When it comes time to switch from using one application to another on your Android device it isn’t immediately clear how to do so. Follow along as we walk you through swapping the default application for any Android task. Initially changing the default application in Android is a snap. After you install the new application (new web browser, new messaging tool, new whatever) Android prompts you to pick which application (the new or the old) you wish to use for that task the first time you attempt to open a web page, check your text message, or otherwise trigger the event. Easy! What about when it comes time to uninstall the app or just change back to your old app? There’s no helpful pop-up dialog box for that. Read on as we show you how to swap out any default application for any other with a minimum of fuss.

    Read the article

  • Change in Job Title and Responsibilities

    - by John Conwell
    I've spent the past 7 years focused primarily on code and database performance. It's an area that I have a passion for, as well as a propensity. But what I've found is that it's very hard to change the culture of a development environment. You can teach performance, you can encourage performance, and you might see a slight shift in how devs think about performance. But without full management backing and support you won't get long lasting changes in the development culture. And in the end, you are back to being the "Perf Guy", fixing performance design flaws, after the fact, one by one by one. Which is why last year I asked my boss to change my title and responsibilities to more naturally align with the team I was working for. So now I'm a Computing Research Engineer (vague, I know), researching in the field of Big Data analytics and visualization. I've found this change revitalizing and a lot of fun. And given the nature of Big Data (it's, um…big) the performance aspects are always ever present.

    Read the article

  • “Big Data” Is A Small Concept Unless You Can Apply It To The Customer Experience

    - by Michael Hylton
    There’s been a lot of recent talk in the industry about “big data”. Much can be said about the importance of big data and the results from it, but you need to always consider the customer experience when analyzing and applying customer data. Personalization and merchandising drive the user experience. Big data should enable you to gain valuable insight into each of your customers and apply that insight at the moment they are on your Web site, talking to one of your call center agents, or at any other touchpoint. While past customer experience is important, you need to combine that with what your customer is doing on your Web site now as well as what they are doing and saying on social networking sites. It’s key to have a 360 degree view of your customer across all of your touchpoints in order to provide the relevant and consistent experience that they have come to expect when interacting with your brand. Big data can enable you to effectively market, merchandize, and recommend the right products to the right customers at the right time. By taking customer data and applying it to product recommendations, you have an opportunity to gain a greater share of wallet through the cross-selling and up-selling of additional products and services. You can also build sustaining loyalty programs to continue to engage with your customers throughout their long-term relationship with your brand.

    Read the article

  • Free NOSQL database for use with C# client [closed]

    - by Mitten
    I've never used NOSQL databases before, but so far it seems like the best data storage solution for my project. I am going to implement a datamining application. The data I would like to mine is thousands of documents which cannot be imported directly into datamining applications. To make the import easier and faster (than importing thousands of documents) I am planning to import these documents into a NOSQL database first and then import the NOSQL database into the datamining software. At the very least, once I have all the data in a NOSQL database I should be able to code the simplest datamining logic myself. Am I correct that NOSQL databases let you create records of data without mandating that all the records adhere to the same data schema (the same column names/types as in a classic table-oriented SQL database)? I think for each document I would create a row/entry/object (not sure what the correct term is in the NOSQL world) which would be a string id, a few columns with unstructured text data, and a dozen columns mostly of datetime and integer types. As its name suggests, NOSQL does not support SQL query syntax, but it supports locating an object (row/entry?) by its unique id. Does NOSQL support querying objects using property=value syntax? Unfortunately most free NOSQL dbs only support Java/C++ clients; which free NOSQL db would you recommend for a C# programmer?
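    For what it's worth, schema-free records and property=value lookups are exactly what document stores expose. As one concrete (and free) option, here is a minimal sketch with MongoDB's official C# driver; the database, collection, and field names are made up for illustration.

        // Sketch: store schema-free documents and query them by property = value.
        using MongoDB.Bson;
        using MongoDB.Driver;

        class NoSqlDemo
        {
            static void Main()
            {
                var client = new MongoClient("mongodb://localhost:27017");
                var collection = client.GetDatabase("mining").GetCollection<BsonDocument>("documents");

                // Documents in the same collection do not have to share a schema.
                collection.InsertOne(new BsonDocument
                {
                    { "_id", "doc-001" },
                    { "body", "unstructured text of the imported document..." },
                    { "importedAt", new BsonDateTime(System.DateTime.UtcNow) },
                    { "wordCount", 1234 }
                });

                // property = value query, no SQL involved.
                var filter = Builders<BsonDocument>.Filter.Eq("wordCount", 1234);
                foreach (var doc in collection.Find(filter).ToList())
                    System.Console.WriteLine(doc["_id"]);
            }
        }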

    Read the article

  • SQL SERVER – Download PSSDIAG Data Collection Utility

    - by pinaldave
    Early in my career as a database consultant – when I was dealing with SQL Server 2000 – I often needed to collect various data related to SQL Server. My favorite tool for collecting the data was the PSSDIAG tool. It is a general purpose diagnostic collection utility that Microsoft Product Support Services uses to collect various logs and data files. It collects Performance Monitor logs, SQL Profiler traces, SQL Server blocking script output, Windows Event Logs, and SQLDIAG output. The data collected can be used by the SQL Nexus tool, which helps you troubleshoot SQL Server performance problems. PSSDIAG is a wrapper around other data collection APIs and utilities, so the performance impact of running PSSDIAG is generally equal to the impact of the traces that PSSDIAG has been configured to capture. If you are using SQL Server 2000 – you need to seriously consider upgrading it to SQL Server 2012. Here is a PSSDIAG Data Collection Utility updated in August 2012. My friend and SQL Server expert Amit Benerjee has written an excellent article on this subject; I encourage all of you to read it. Note: For SQL Server 2012 there is SQLDiag. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Using XML as data storage

    - by Kian Mayne
    I was thinking about the XML format and the following quote: “XML is not a database. It was never meant to be a database. It is never going to be a database. Relational databases are proven technology with more than 20 years of implementation experience. They are solid, stable, useful products. They are not going away. XML is a very useful technology for moving data between different databases or between databases and other programs. However, it is not itself a database. Don't use it like one.“ – Effective XML: 50 Specific Ways to Improve Your XML by Elliotte Rusty Harold (page 230, Part 4, Item 41, 2nd paragraph). This seems to really stress that XML should not be used for data storage and should only be used for program-to-program interoperability. Personally, I disagree: .NET's app.config file, which is used to store a program's settings, is an example of data storage in an XML file. For databases, however, rather than configurations and the like, XML should not be used. To develop my point, I will use two examples: A) data about customers with fields that are all on one level, i.e. there are a number of fields all relating to one customer, with no children; B) data about the configuration of an application, where nested fields and properties make a lot of sense. So my question is: is this still a valid statement, and is it now acceptable to store data using XML? EDIT: I've sent an email to the author of that quote to ask for his input/extra context.
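    To make example A concrete, here is a small C# sketch (element and field names invented for illustration) of flat customer records stored in an XML file and queried with LINQ to XML, much as one would query a table; whether this scales is exactly the point under debate.

        // Case A: flat customer records in an XML file, queried like a table.
        using System;
        using System.Linq;
        using System.Xml.Linq;

        class XmlAsStorage
        {
            static void Main()
            {
                var store = new XDocument(
                    new XElement("customers",
                        new XElement("customer",
                            new XAttribute("id", 1),
                            new XElement("name", "Ada Lovelace"),
                            new XElement("city", "London")),
                        new XElement("customer",
                            new XAttribute("id", 2),
                            new XElement("name", "Charles Babbage"),
                            new XElement("city", "London"))));
                store.Save("customers.xml");

                // Roughly "SELECT name FROM customers WHERE city = 'London'" in XML terms.
                var names = XDocument.Load("customers.xml")
                    .Descendants("customer")
                    .Where(c => (string)c.Element("city") == "London")
                    .Select(c => (string)c.Element("name"));
                Console.WriteLine(string.Join(", ", names));
            }
        }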

    Read the article

  • Cron not able to successfully change background

    - by Solenoid
    I'm running 12.04 with a custom XML background (a modification of Day of Ubuntu) that changes based on time of day. I've noticed that there's a significant delay between when the changes are scheduled to take place in the XML file and when they actually show up on the background. I've also noticed that when I resume from suspend I don't get the correct background image either. I've found that cycling the wallpaper manually will fix this, and I've written a script to automate the process. If I execute the script manually it works fine. However, when I schedule the script to run in cron, cron doesn't change the background. To make sure that the script was being run properly by cron, I had it create a directory in my home folder after running the background change, and the directory is created successfully, so I know cron is running and executing the script. My script:
        #!/bin/bash
        sleep 5
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml
        sleep 1
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml
        sleep 1
        mkdir /home/zak/iscronworking
        exit
    Is cron just not able to access gsettings? The job is on my user crontab so it shouldn't be running as root. Alternately, is there any way to make Precise play nicely with XML wallpaper?

    Read the article

  • Change power button to 'Ask' in Xubuntu 13.10

    - by Gully.Moy
    I have recently installed Xubuntu 13.10 on my Vaio VPCEA, making me a Linux beginner. The problem is that the laptop's power button is right on the edge of the bezel, making it far too easy to press accidentally – in my opinion a design fault by Sony. At present, when I press the power button it shuts down straight away, and as you can imagine, when I'm accidentally pressing it all the time it gets very annoying! So I planned to change it to ask what I would like to do when I press it, or at least ask if I'm sure. So I went through the Xfce GUI options "Settings Manager" - "Power Manager" to the field "When power button is pressed", but it was already set to "Ask". So I did some digging and found a thread telling me to navigate to /etc/xdg/xfce4/xfconf/xfce-perchannel-xml/xfce4-power-manager.xml, where it said to find power-button-action and check that value="3". It already did. So I looked some more and found this thread which focuses on acpi scripts. I tried solutions 1 & 2, using sudoedit to change the files accordingly (I have made executable bash shell scripts already, so I think I followed them correctly), but still no difference. I also found this thread which instructed me to edit /etc/systemd/logind.conf so that HandlePowerKey=ignore. Still no luck. I even tried my own approach of completely disabling /etc/acpi/powerbtn.sh by renaming it powerbtn.sh.bak, hoping for at least no response from the power button... and I have done many reboots in between... but still it shuts down! I have also read that some people have the file /etc/acpi/events/power_button, but I do not. So does anyone have any other ideas? What else could be executing the shutdown sequence? Is there something I'm missing? I haven't undone any of these actions, so every one of the above files is currently edited on my computer, with the exception that "Solution 2" automatically undid "Solution 1" above. Thanks guys.

    Read the article

  • How to Implement Complex Form Data?

    - by SoulBeaver
    I'm supposed to implement a relatively complex form that looks as follows, but has at least four more pages requiring the user to fill in all the necessary information for the tracks. This data will need to be sent to the server, which is implemented using Dropwizard. I'm looking for best practices on how to upload and send such a complex form with potentially dozens of songs to the server. The simplest available solution I have seen is a simple multipart/form-data request with the following form schema (Source):
    Client:
        <html>
        <body>
            <h1>File Upload with Jersey</h1>
            <form action="rest/file/upload" method="post" enctype="multipart/form-data">
                <p>Select a file : <input type="file" name="file" size="45" /></p>
                <input type="submit" value="Upload It" />
            </form>
        </body>
        </html>
    Server:
        @POST
        @Path("/upload")
        @Consumes(MediaType.MULTIPART_FORM_DATA)
        public Response uploadTrack(final FormDataMultiPart multiPart) {
            List<FormDataBodyPart> artists = multiPart.getFields("artist");
            StringBuffer output = new StringBuffer();
            for (FormDataBodyPart artist : artists)
                output.append(artist.getValueAs(String.class));
            List<FormDataBodyPart> tracks = multiPart.getFields("track");
            for (FormDataBodyPart track : tracks)
                writeToFile(track.getValueAs(InputStream.class), "Foo");
            return Response.status(200).entity(output.toString()).build();
        }
    Then I have also read about file uploads via Ajax or FormData (Mozilla HttpRequest), which allows for POSTs in the formats application/x-www-form-urlencoded, multipart/form-data, or text/plain. I don't know which approach, if any, is best. An ideal solution would be to utilize Jackson to convert a JSON string into my data objects, but I don't get the impression that this is possible with binary data.

    Read the article

  • WolframAlpha Can Now Do In-depth Analysis of Your Facebook Account

    - by Jason Fitzpatrick
    If you’re a big fan of WolframAlpha’s ability to crunch the numbers on just about anything – and we certainly are – you’ll likely be just as delighted as we were to watch it massage the data from your Facebook account. Find out your most liked, discussed, and shared posts, see your Facebook habits, and spot other neat trends. I unleashed it on my account this morning, not sure what to expect from the results. Within the results tabulation WolframAlpha provided me with all sorts of neat data breakdowns. I now know exactly how many days it is to my next birthday, the composition of my aggregate posting habits (how many posts are status updates, links, or photos), the time of day when I do the most posting (and what the composition of those posts is), and my average post length. I also know my most liked post and my most commented on post. It will even crunch the numbers on your network of friends (60.6% of my friends are married, for example). By far one of the more interesting analyses it does on the friendship data, however, is organizing all your friends into relationship clusters so you can see who in your Facebook network is friends with other people in your Facebook network. The service from WolframAlpha is free: simply visit the WolframAlpha search portal and type in “Facebook report” to start the process. You’ll be prompted to create a WolframAlpha account if you don’t have one and to authorize the WolframAlpha Facebook app to access your data. Your Facebook data is cached to your WolframAlpha account for one hour in order to crunch the numbers and display the results.

    Read the article

  • Changing body type without changing the center of mass

    - by philipp
    I have a Box2D project with some bodies in it, which move around without any user interaction. But if the user selects one of the bodies, he should be able to move it around just like he wants to. To keep it short, I want to change the type of the body to "kinematic" while the user controls it, and back to "dynamic" afterwards. If I do so, the rotation center of the body changes along with the type. How can I reset this? The body's fixture is created from a single b2PolygonShape, with its vertices set via SetAsArray(). Greetings, philipp. EDIT: So I looked around at setting the local center of bodies, which brought me to this solution:
        var md = new b2MassData();
        this.body.GetMassData( md );
        this.body.SetType( b2Body.b2_kinematicBody );
        this.body.SetMassData( md );
    That did not work, so I had a look at the source and found that SetMassData() always returns immediately if the body is not "dynamic". So I tried this:
        var md = new b2MassData();
        this._body.GetMassData( md );
        this._body.SetType( b2Body.b2_kinematicBody );
        this._body.m_sweep.localCenter.Set( md.center.x, md.center.y );
    This actually modifies the private data of the body. But it works and no errors appear. Can I really do this without the risk of breaking the application, or in other words, under which circumstances could this solution cause errors? N.B.: I am using the latest release of box2dweb. Greetings, philipp

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps to approach this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, and in case it came to a point where the requirement could not be finished prior to release, I asked if it would be a viable option to scrap the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written (by QA) prior to the implementation and given to me, to aid me in the comprehension of this task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request and on the responsibility of this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state due to the complexity of the problem and lack of time. This only happened after a 2 hour long meeting with other seniors in order to convince the aforementioned folks.

    Read the article
