Search Results

Search found 53054 results on 2123 pages for 'sql sample database'.


  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have had to build a Java program to do this with SQL and then use an application service, but it is now possible with just configuration.

    The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL, or it can use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into an Info Explorer Zone. You can use any of the explorer zone types, but I typically recommend F1-DE-SINGLE as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI for criteria (remember explorer zones can be used on the user interface as well), but for web services they can be used as normal filters (this means you can use up to 20 filters all up).

    Once you are happy with the zone, you need to define it as a Business Service. We have a generic service called FWLZDEXP which allows an explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self-explanatory in terms of structure. The schema pattern looks like this:

    - Zone element - maps to the ZONE_CD element; the default value is the zone name you just created. This links the business service to the zone.
    - Filter elements - you name the filters as you like, but the mapField is set to Fx_VALUE where x is the filter number corresponding to the filter element in the zone definition.
    - Hidden filter elements - you name the filters as you like, but the mapField is set to Hx_VALUE where x is the filter number corresponding to the hidden filter element in the zone definition.
    - results group - this holds the elements of the result set. Each element in your result set has a tag name and is linked to the COL_VALUE mapField, and the row element lists the SEQNO of the column. This corresponds to the column number in the result set in the zone.

    Take the F1-USGRACML zone as an example: it returns the access modes for a user group, with user group and application service filters. In that schema, the userGroup and applicationService elements are the filters and the rows would contain a list of accessModeDescr. This is just a simple example to illustrate the point; there are lots of examples in the product that you can investigate. One recommendation, to save time, is to copy the schema from one of the examples rather than typing it from scratch. You can simply modify the tags and other elements to suit your needs.

    Once the Business Service is defined, it can be exposed as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon an Info Explorer Zone. This is one of my favorite components as it allows interfaces to be simplified.

    This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year and I look forward to communicating the exciting developments we are working on at the moment as they are released.

    Read the article

  • Visual Studio 2010 SP1

    - by ScottGu
    Last week we shipped Service Pack 1 of Visual Studio 2010 and the Visual Studio Express Tools. In addition to bug fixes and performance improvements, SP1 includes a number of feature enhancements. This includes improved local help support, IntelliTrace support for 64-bit applications and SharePoint, built-in Silverlight 4 Tooling support in the box, unit testing support when targeting .NET 3.5, a new performance wizard for Silverlight, IIS Express and SQL CE Tooling support for web projects, HTML5 Intellisense for ASP.NET, and more. TFS 2010 SP1 was also released last week, together with a new TFS Project Server Integration Pack and Load Test Feature Pack. Brian Harry has a good blog post about the TFS updates here.

    VS 2010 SP1 Download
    Click here to download and install SP1 for all versions of Visual Studio (including Express). This installer examines what you have installed on your machine, and only downloads the servicing packages necessary to update them to SP1. The time it takes to download and update will consequently depend on what you have installed. Jon Galloway has a good blog post with tips on speeding up the SP1 install by uninstalling unused components.

    Web Platform Installer Bundles
    In addition to the core VS 2010 SP1 installer, we have also put together two Web Platform Installer (WebPI) bundles that automate installing SP1 together with additional web-specific components:
    - VS 2010 SP1 WebPI Bundle
    - Visual Web Developer 2010 SP1 WebPI Bundle
    The above WebPI bundles automate installing:
    - VS 2010/VWD 2010 SP1
    - ASP.NET MVC 3 (runtime + tools support)
    - IIS 7.5 Express
    - SQL Server Compact Edition 4.0 (runtime + tools support)
    - Web Deployment 2.0
    Only the components that are not already installed on your machine will be downloaded when you use the above WebPI bundles. This means that you can run the WebPI bundle at any time (even if you have already installed SP1 or ASP.NET MVC 3) and not have to worry about wasting time downloading/installing these components again.
    Earlier this year I did two posts that discussed how to use IIS Express and SQL CE with ASP.NET projects in SP1. Read the below posts to learn more about how to use them after you run the above bundles:
    - Visual Studio 2010 SP1 and IIS Express
    - Visual Studio 2010 SP1 and SQL CE for ASP.NET
    The above feature additions work with any web project type – including both ASP.NET Web Forms and ASP.NET MVC.

    Additional SP1 Notes
    Two additional notes about VS 2010 SP1:
    1) One change we made between RTM and SP1 is that by default Visual Studio now uses software rendering instead of hardware acceleration when running on Windows XP. We made this change because we’ve seen reports of (often inconsistent) performance issues caused by older video drivers. Running in software mode eliminates these and delivers consistent speeds. You can optionally re-enable hardware acceleration with SP1 using Visual Studio’s Tools->Options menu command – we did not remove support for HW acceleration on XP, we simply changed the default setting for it. Jason Zander has written more details on the change and how to re-enable HW acceleration inside VS here.
    2) We have discovered an issue where installing SP1 can cause T-SQL IntelliSense within SQL Server Management Studio 2008 R2 to stop working (typing still works – but IntelliSense doesn’t show up). The SQL team is investigating this now and I’ll post an update on how to fix this once more details are known.

    Hope this helps,
    Scott

    P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables), but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers.

    The problem with temp tables is that, because they are scoped either to the procedure or to the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. The connection will then hang around in the heap for what might be hours before being picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive to use for the developer because they conform to scoping rules that are closer to those in procedural code. On the surface then, an ideal way to deal with issues related to tempdb memory hogging.

    So why did Phil qualify his recommendation to use table variables? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally, by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved.

    In the meantime, what is the right approach? Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.
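
    To make the scoping point concrete, here is a minimal T-SQL sketch (an editorial addition, not taken from Phil's article) comparing how long each object survives; the table and column names are invented for illustration:

      -- Temp table: survives beyond the batch, until the session closes or it is dropped.
      CREATE TABLE #Orders (OrderID int PRIMARY KEY, Total money);
      INSERT INTO #Orders VALUES (1, 25.00);
      GO
      SELECT * FROM #Orders;   -- still visible in the next batch of the same session
      DROP TABLE #Orders;      -- explicit clean-up
      GO

      -- Table variable: cleaned up automatically when the batch completes.
      DECLARE @Orders TABLE (OrderID int PRIMARY KEY, Total money);
      INSERT INTO @Orders VALUES (1, 25.00);
      SELECT * FROM @Orders;   -- fine: same batch
      GO
      -- Referencing @Orders here would fail: it no longer exists.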

    Read the article

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle-specific DML for inserting into multiple target tables from a single query result – without reprocessing the query or staging its result. When designing this to exploit the IKM you must split the problem into the reusable parts – the select part goes in one interface (I named it SELECT_PART), then each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below…

      /* INSERT_SPECIAL interface */
      insert all
        when 1=1 and (INCOME_LEVEL > 250000) then
          into SCOTT.CUSTOMERS_NEW
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL,
             CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
          values
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL,
             CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
      /* INSERT_REGULAR interface */
        when 1=1 then
          into SCOTT.CUSTOMERS_SPECIAL
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL,
             CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
          values
            (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL,
             CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
      /* SELECT_PART interface */
      select
        CUSTOMERS.EMAIL EMAIL,
        CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT,
        UPPER(CUSTOMERS.NAME) NAME,
        CUSTOMERS.USER_MODIFIED USER_MODIFIED,
        CUSTOMERS.DATE_MODIFIED DATE_MODIFIED,
        CUSTOMERS.BIRTH_DATE BIRTH_DATE,
        CUSTOMERS.MARITAL_STATUS MARITAL_STATUS,
        CUSTOMERS.ID ID,
        CUSTOMERS.USER_CREATED USER_CREATED,
        CUSTOMERS.GENDER GENDER,
        CUSTOMERS.DATE_CREATED DATE_CREATED,
        CUSTOMERS.INCOME_LEVEL INCOME_LEVEL
      from
        SCOTT.CUSTOMERS CUSTOMERS
      where
        (1=1)

    Firstly I create a SELECT_PART temporary interface for the query to be reused, and in the IKM assignment I state that it is defining the query, it is not a target, and it should not be executed. Then in my INSERT_SPECIAL interface, loading a target with a filter, I set define query to false, then set true for the target table and execute to false. This interface uses the SELECT_PART query definition interface as a source. Finally, in my final interface loading another target, I set define query to false again, set target table to true and execute to true – this is the go run it indicator! To coordinate the statement construction you will need to create a package with the select and insert statements. With 11g you can now execute the package in simulation mode and preview the generated code, including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement.

    A similar IKM exists for Teradata. The ODI IKM Teradata Multi Statement supports this multi-statement request in 11g; here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/ : Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them. It works in the same way as the ODI MTI IKM: multiple interfaces orchestrated in a package, each interface contributes some SQL, and the last interface in the chain executes the multi statement.

    Read the article

  • Silverlight Cream for December 13, 2010 -- #1010

    - by Dave Campbell
    In this Issue: Rénald Nollet, Benjamin Gavin, Dennis Doomen, Tim Greenfield, Mike Taulty, Jeff Blankenburg, Michael Crump, Laurent Duveau, Dragos Manolescu, KeyboardP, Yochay Kiriaty.

    Above the Fold:
    - Silverlight: "Silverlight RIA Services and Basic, Anonymous Authentication" Benjamin Gavin
    - WP7: "Solving Circular Navigation in Windows Phone Silverlight Applications" Yochay Kiriaty
    - SQL Azure: "SQL Azure Database Manager – Part 1 : How to connect to your SQL Azure DB" Rénald Nollet

    Shoutouts:
    - Yochay Kiriaty has a post up on the Windows Phone Developer Blog about open source (MSPL) projects helping WP7 devs: Windows Phone Recipes – Helping the Community
    - Jesse Liberty's latest Yet Another Podcast is up and this time it's Joe Stagner: Yet Another Podcast #18 – Joe Stagner
    - Josh Schwartzberg sent me this link to what is apparently his yearly web-only rock Christmas album: MetalXmas... done in Silverlight and RIA Services

    From SilverlightCream.com:
    - SQL Azure Database Manager – Part 1 : How to connect to your SQL Azure DB: Rénald Nollet posted Part 1 of a series on a SQL Azure database manager all in Silverlight... has a live demo running, some description, and is making us wait for the next part!
    - Silverlight RIA Services and Basic, Anonymous Authentication: Benjamin Gavin has a quick post up resolving a basic RIA Services problem that I bet a lot of folks are looking for the answer on... like 500 series errors... cool little find he ferreted out...
    - A night of Silverlight, WPF, unit testing and Caliburn Micro: Dennis Doomen, in concert with his employer, gave a couple of talks at the local DotNED user group, and covered literally a cornucopia of topics... slides and example code for both talks... lotsa material here...
    - Tim Greenfield on PuzzleTouch WP7 Application: Tim Greenfield is the latest WP7 app developer to be interviewed by the SilverlightShow crew... lots of interesting comments and insight from Tim.
    - Rebuilding the PDC 2010 Silverlight Application (Part 4): Mike Taulty has part 4 of his PDC 2010 Silverlight app construction project up and is taking the app into Blend, and covers the considerations this brought to the table.
    - What I Learned In WP7 – Issue #2: Jeff Blankenburg continues his "What I Learned" series with this discussion about fonts, the Non-Linear Navigation service I mention below, and possible WP7 jobs.
    - Part 3 of 4 : Tips/Tricks for Silverlight Developers: Michael Crump has Part 3 of his Tips/Tricks up today. Lots of goodies this time: underlining in a TextBlock, getting browser info, startup params, VisualTreeHelper, and child windows.
    - My Windows Phone 7 presentation in Montreal: Laurent Duveau gave a WP7 presentation in Montreal as part of the Microsoft Windows Phone 7 Developer's Briefing, and has posted his materials and slide deck.
    - WP7 Code: Mocking Event Streams with IEnumerable: Dragos Manolescu has a very cool post up on using IEnumerable to mock event streams by leveraging the IObservable/IEnumerable duality, and uses the 2D bubble app that you can run and test in the emulator without needing an accelerometer.
    - Transparent Wallpapers – Video Tutorial: KeyboardP has had so many queries about his Transparent wallpaper for WP7 that he produced a video tutorial for it...
    - Solving Circular Navigation in Windows Phone Silverlight Applications: Yochay Kiriaty discusses the first recipe they are releasing... see the shoutout above, a Nonlinear Navigation Service... to help with apps that have loops in navigation.

    Stay in the 'Light!
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How do I restart MySQL in monit when page contains specific text?

    - by Tyler
    How do I check if a web page contains the text "Error connecting to database" and if the text exists in the page restart the database? Here's what I have so far but it isn't working:

      check host website.com with address website.com
        group database
        start program = "/usr/bin/service mysql start"
        stop program = "/usr/bin/service mysql stop"
        if url http://website.com content == "Error connecting to database" then restart

    Read the article

  • Tuning Red Gate: #1 of Many

    - by Grant Fritchey
    Everyone runs into performance issues at some point. Same thing goes for Red Gate software. Some of our internal systems were running into some serious bottlenecks. It just so happens that we have this nice little SQL Server monitoring tool. What if I were to, oh, I don't know, use the monitoring tool to identify the bottlenecks, figure out the causes, and then apply a fix (where possible) and then start the whole thing all over again? Just a crazy thought. OK, I was asked to. This is my first time looking through these servers, so here's how I'd go about using SQL Monitor to get a quick health check, sort of like checking the vitals on a patient.

    First time opening up our internal SQL Monitor instance and I was greeted with this: Oh my. Maybe I need to get our internal guys to read my blog. Anyway, I know that there are two servers where most of the load is. I'll drill down on the first. I'm selecting the server, not the instance, by clicking on the server name. That opens up the Global Overview page for the server. The information here is much more applicable to the "oh my gosh, I have a problem now" type of monitoring. But, looking at this, I am seeing something immediately. There are four (4) drives on the system. The C:\ has an average read time of 16.9ms, more than double the others. Is that a problem? Not sure, but it's something I'll look at. Its write time is higher too.

    I'll keep drilling down, first, to the unclosed alerts on the server. Now things get interesting. SQL Monitor has a number of different types of alerts, some related to error states, others to service status, and then some related to performance. Guess what I'm seeing a bunch of right here: long-running queries and long job durations. If you check the dates, they're all recent, within the last 24 hours. If they had just been old, uncleared alerts, I wouldn't be that concerned. But with all these, all performance related, and all in the last 24 hours, yeah, I'm concerned. At this point, I could just start responding to the alerts. If I click on one of the Long-running query alerts, I'll get all kinds of cool data that can help me determine why the query ran long. But I'm not in a reactive mode here yet. I'm still gathering data, trying to understand how the server works. I have the information that we're generating a lot of performance alerts; let's sock that away for the moment.

    Instead, I'm going to back up and look at the Global Overview for the SQL instance. It shows all the databases on the server and their status. Then it shows a number of basic metrics about the SQL Server instance, again for that "what's happening now" view of things. Then, down at the bottom, there is the Top 10 expensive queries list: This is great stuff. And no, not because I can see the top queries for the last 5 minutes, but because I can adjust that out to 3 days. Now I can see where some serious pain is occurring over the last few days. Databases have been blocked out to protect the guilty.

    That's it for the moment. I have enough knowledge of what's going on in the system that I can start to try to figure out why the system is running slowly. But I want to look a little more at some historical data, to understand better how this server is behaving. More next time.
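
    For readers without the monitoring tool, a rough approximation of that expensive-queries view can be pulled straight from the plan cache with a plain DMV query. This is my own illustrative sketch, not something SQL Monitor generates, and it only sees statements still cached:

      -- Top 10 cached statements by total elapsed time; swap the ORDER BY to
      -- total_logical_reads or total_worker_time depending on what "expensive" means to you.
      SELECT TOP 10
          SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
              ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS statement_text,
          qs.execution_count,
          qs.total_elapsed_time,
          qs.total_logical_reads
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_elapsed_time DESC;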

    Read the article

  • Fixing some Visual Studio RC install issues

    - by terje
    The Visual Studio RC has shown some install issues in some cases, particularly for those who upgrade from VS 11 Beta. I have listed the fixes known now below, and will update if there are more issues. Note that a repair will not fix the issue, and a Windows restore and subsequent reinstall may not fix it either. The system seems to remember too much. That was the case for me, at least. The fixes below, however, cure these issues.

    1. The Team Explorer Build node doesn't work
    You get an error saying System.TypeLoadException like this:
    To solve this, do as follows:
    1. Open a command prompt as administrator.
    2. Go to your Program Files directory for VS 2012 and down to the extension folder, like: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer
    3. Run "gacutil -if Microsoft.TeamFoundation.Build.Controls.dll"

    2. The SQL Editor gives a loading error
    When you start up VS 2012 RC you get a loading error message. The same happens if you try to go from the menu to SQL/Transact-SQL Editor/New Query.
    To solve this, do as follows:
    1. Open Control Panel/Programs and Features.
    2. Locate the "Microsoft SQL Server 2012 Data-Tier App Framework" entries. (Note, you might find up to 4 such instances.) The ones with version numbers ending in 55 are from the SQL 2012 RC, the ones ending in 60 are from the SQL 2012 RTM. There are two of each, one for x32 and one for x64. Which is which no one knows.
    3. Right-click each of them, and select Repair. (It would be nice if someone with this issue tried only the latest RTM ones, to see if that clears the error, and commented back on this post. I am out of non-functioning VS's.)

    3. Errors referring to some extension
    You get errors referring to some extension that can't be loaded, or can't be found. Check the activity log (see below), and verify there. If you see yellow collision warnings there, the fix here should solve those too.
    To solve these:
    1. Open a Visual Studio 2012 command prompt.
    2. Run: devenv /resetsettings

    How to check for errors using the log
    Do as follows to get to the activity log for Visual Studio 2012 RC:
    1. Open a Visual Studio 2012 command prompt.
    2. Run: devenv /log (this starts up Visual Studio).
    3. Go to %appdata%/Microsoft\VisualStudio\11.0
    4. Double-click the file named ActivityLog.xml. It will start up in your browser, and be formatted using the xslt in the same directory.
    5. Look for items marked in red.
    Example for Issue 1:

    Read the article

  • How to recover a MySQL InnoDB table?

    - by Kau-Boy
    When I try to launch the Plesk administration page of my server I get the following error:

      ERROR: PleskMainDBException
      MySQL query failed: MySQL server has gone away

    The MySQL server itself is working well, although it seems that the Plesk database is somehow corrupt, and any action on this database results in a restart of the mysql process, so even queries to other databases on the same MySQL server will be lost. If I try to connect to the Plesk database using phpMyAdmin, I can only see the number of tables the database originally had, but I am not able to open the tables listing. As soon as I try it, the mysql process crashes again. Connecting to the database over ssh works; I can even run a SELECT statement against any table and get a result. I don't know if it is a Plesk error, an error in the psa database, or even the MySQL server. Can you give me any tips on how to recover the Plesk system? Should I try to repair the Plesk installation first? And if so, how can I do it, and will all my settings get lost doing it?

    EDIT: Trying to dump the database, I get the following error:

      mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES

    EDIT: I could find out that the table 'data_bases' is responsible for the crash of the MySQL server process, but trying to repair it using a REPAIR TABLE statement doesn't work.

    EDIT: I have now dropped the whole database and restored it from a dump. But when I try to recreate the data_bases table I get the following error:

      ERROR 1005 (HY000) at line 24: Can't create table './psa/data_bases.frm' (errno: 121)

    I am not able to create the table again. Somewhere in the MySQL system there is still some information about this table. I tested the same thing locally: if I just delete the table files and then try to create the table again, I get the same error. If I drop the table through MySQL, I can create the table again afterwards. But when I try to drop the table through MySQL, the whole MySQL server crashes. Is there any way to solve this issue?

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to optimize.

    A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in and execute it) you will see all database objects sorted by used size in descending order.

      SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC

    You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:
    - TABLE_ID: the name of the object (it can also be a dictionary or an index)
    - COLUMN_ID: the column name the object belongs to (you can also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes)
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used for the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:
    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. a temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a small number of distinct values) and going to the higher density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
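
    As a small variation on the query above (same DMV; this trimmed version is mine, not from the original post), you can project just the columns discussed, which makes the size-per-row comparison easier to eyeball:

      SELECT TABLE_ID, COLUMN_ID, RECORDS_COUNT, USED_SIZE
      FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
      ORDER BY USED_SIZE DESC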

    Read the article


  • How to convert a DataSet object into an ObjectContext (Entity Framework) object on the fly?

    - by Marcel
    Hi all, I have an existing SQL Server database where I store data from large, specific log files (often 100 MB and more), one per database. After some analysis, the database is deleted again. From the database, I have created both an Entity Framework Model and a DataSet Model via the Visual Studio designers. The DataSet is only for bulk importing data with SqlBulkCopy, after a quite complicated parsing process. All queries are then done using the Entity Framework Model, whose CreateQuery method is exposed via an interface like this:

      public IQueryable<TTarget> GetResults<TTarget>() where TTarget : EntityObject, new()
      {
          return this.Context.CreateQuery<TTarget>(typeof(TTarget).Name);
      }

    Now, sometimes my files are very small, and in such a case I would like to omit the import into the database and just have an in-memory representation of the data, accessible as entities. The idea is to create the DataSet, but instead of bulk importing, to directly transfer it into an ObjectContext which is accessible via the interface. Does this make sense?

    Here's what I have done for this conversion so far: I traverse all tables in the DataSet, convert the single rows into entities of the corresponding type, and add them to an instantiated object of my typed Entity context class, like so:

      MyEntities context = new MyEntities(); // create new in-memory context
      // ....
      // get the item in the navigations table
      MyDataSet.NavigationResultRow dataRow = ds.NavigationResult.First();
      // here, a foreach would be necessary in a real-world scenario
      NavigationResult entity = new NavigationResult
      {
          Direction = dataRow.Direction,
          // ...
          NavigationResultID = dataRow.NavigationResultID
      };                                      // convert to an entity
      context.AddToNavigationResult(entity);  // add to entities
      // ....

    This is very tedious work, as I would need to create a converter for each of my entity types and iterate over each table in the DataSet I have. Beware, if I ever change my database model.... Also, I have found out that I can only instantiate MyEntities if I provide a valid connection string to a SQL Server database. Since I do not want to actually write to my fully fledged database each time, this hinders my intentions. I intend to have only some in-memory proxy database.

    Can I do this more simply? Is there some automated way of doing such a conversion, like generating an ObjectContext out of a DataSet object?

    P.S.: I have seen a few questions about unit testing that seem somewhat related, but not quite the same.

    Read the article

  • CSS Equivalent of Table Rowspan with Fluid Height

    - by Gabe
    I'm trying to accomplish the following using CSS:

      <table border="1" width="300px">
        <tr>
          <td rowspan="2">This row should equal the height (no fixed-height allowed) of the 2 rows sitting to the right.</td>
          <td>Here is some sample text. And some additional sample text.</td>
        </tr>
        <tr>
          <td>Here is some sample text. And some additional sample text.</td>
        </tr>
      </table>

    The examples I've seen for accomplishing this utilize fixed heights or allow the content to wrap around the left column. Is there an elegant way to accomplish this using CSS?

    Read the article

  • T-SQL Script to Delete All The Relationships Between A Bunch Of Tables in a Schema and Other Bunch i

    - by Galilyou
    Guys, I have a set of tables (say Account, Customer) in a schema (say dbo) and I have some other tables (say Order, OrderItem) in another schema (say inventory). There's a relationship between the Order table and the Customer table. I want to delete all the relationships between the tables in the first schema (dbo) and the tables in the second schema (inventory), without deleting the relationships between tables inside the same schema. Is that possible? Any help appreciated.
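
    One possible approach, sketched here as an editorial addition (it assumes SQL Server 2005 or later and its catalog views, and uses the schema names from the question), is to generate the DROP CONSTRAINT statements for every foreign key whose parent and referenced tables sit in different schemas, then review and run the generated script:

      -- Generate DROP CONSTRAINT statements for foreign keys that cross
      -- between the dbo and inventory schemas, in either direction.
      SELECT 'ALTER TABLE ' + QUOTENAME(sp.name) + '.' + QUOTENAME(tp.name)
           + ' DROP CONSTRAINT ' + QUOTENAME(fk.name) + ';'
      FROM sys.foreign_keys AS fk
      JOIN sys.tables  AS tp ON fk.parent_object_id     = tp.object_id
      JOIN sys.schemas AS sp ON tp.schema_id            = sp.schema_id
      JOIN sys.tables  AS tr ON fk.referenced_object_id = tr.object_id
      JOIN sys.schemas AS sr ON tr.schema_id            = sr.schema_id
      WHERE (sp.name = 'dbo'       AND sr.name = 'inventory')
         OR (sp.name = 'inventory' AND sr.name = 'dbo');

    Relationships whose two tables live in the same schema never satisfy the WHERE clause, so they are left untouched.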

    Read the article

  • Circular reference error when outputting LINQ to SQL entities with relationships as JSON in an ASP.N

    - by roosteronacid
    Here's a design-view screenshot of my dbml file. The relationships are auto-generated by foreign keys on the tables. When I try to serialize a query result into JSON I get a circular reference error:

      public ActionResult Index()
      {
          return Json(new DataContext().Ingredients.Select(i => i));
      }

    But if I create my own collection of "bare" Ingredient objects, everything works fine:

      public ActionResult Index()
      {
          return Json(new Entities.Ingredient[]
          {
              new Entities.Ingredient(),
              new Entities.Ingredient(),
              new Entities.Ingredient()
          });
      }

    Also, serialization works fine if I remove the relationships on my tables. How can I serialize objects with relationships, without having to turn to a 3rd-party library? I am perfectly fine with just serializing the "top-level" objects of a given collection, that is, without the relationships being serialized as well.

    Read the article

  • Exception using SQLiteDataReader

    - by galford13x
    I'm making a Custom SQLite Wrapper. This is meant to allow a presistent connection to a database. However, I receive an exception when calling this function twice. public Boolean DatabaseConnected(string databasePath) { bool exists = false; if (ConnectionOpen()) { this.Command.CommandText = string.Format(DATABASE_QUERY); using (reader = this.Command.ExecuteReader()) { while (reader.Read()) { if (string.Compare(reader[FILE_NAME_COL_HEADER].ToString(), databasePath, true) == 0) { exists = true; break; } } reader.Close(); } } return exists; } I use the above function to check if the database is currently open before executing a command or trying to open a database. The first time I execute the function, it executes with no issue. After that the reader = this.Command.ExecuteReader() throws an exception Object reference not set to an instance of an object. StackTrace: at System.Data.SQLite.SQLiteStatement.Dispose() at System.Data.SQLite.SQLite3.Reset(SQLiteStatement stmt) at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt) at System.Data.SQLite.SQLiteDataReader.NextResult() at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave) at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior) at System.Data.SQLite.SQLiteCommand.ExecuteReader() at EveTraderApi.Database.SQLDatabase.DatabaseConnected(String databasePath) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApi\Database\Database.cs:line 579 at EveTraderApi.Database.SQLDatabase.OpenSQLiteDB(String filename) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApi\Database\Database.cs:line 119 at EveTraderApiExample.Form1.CreateTableDataTypes() in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Form1.cs:line 89 at EveTraderApiExample.Form1.Button1_ExecuteCommand(Object sender, EventArgs e) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Form1.cs:line 35 at System.Windows.Forms.Control.OnClick(EventArgs e) at System.Windows.Forms.Button.OnClick(EventArgs e) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(Form mainForm) at EveTraderApiExample.Program.Main() in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Program.cs:line 18 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at 
System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart()

    Read the article

  • In SQL Server Business Intelligence, why would I create a report model from an OLAP cube?

    - by ngm
    In Business Intelligence Developer Studio, I'm wondering why one would want to create a report model from an OLAP cube. As far as I understand it, OLAP cubes and report models are both business-oriented views of underlying structures (usually relational databases) that may not mean much to a business user. The cube is a multidimensional view in terms of dimensions and measures, and the report model is... well I'm not sure entirely -- is it a more business-oriented, but still essentially relational view? Anyway, in Report Builder I can connect directly to both an OLAP cube or a report model. So I don't see why, if I have an OLAP cube which already provides a business-oriented view of the data suitable for end-users, why I would then convert that to a report model and use that in Report Builder instead. I think I'm obviously missing some fundamental difference between report models and cubes -- any help appreciated!

    Read the article

  • How to loop through columns in an oracle pl/sql cursor.

    - by Lloyd
    I am creating a dynamic cursor and I would like to loop over the columns that exist in the cursor. How would I do that? For example:

      create or replace procedure dynamic_cursor(empid in varchar2, RC IN OUT sys_refcursor)
      as
        stmt varchar2(100);
      begin
        stmt := 'select * from employees where id = ' || empid;
        open RC for stmt;
        for each {{COLUMN OR SOMETHING}}  -- TODO: Get this to work
        loop
          null;
        end loop;
      end;

    Read the article

  • svcutil, WSDL, and the generated interfaces not being sufficient for implementation

    - by chtmd
    I have a WSDL file defining a service that I have to implement in WCF. I had read that I could generate the proxy using svcutil from the WSDL file, and that I could then use the generated interfaces to implement the service. Unfortunately, I can't quite seem to find a way to have the interfaces contain the correct attributes to expose the contracts. All operations have the "OperationContractAttribute" attribute, but it appears as though for the service to be exposed, I require the "OperationContract" for each one. Same thing with "ServiceContractAttribute" and "ServiceContract", and I imagine DataContract, but I haven't gotten that far. I could manually make these changes, but I would much prefer a technique where the existing code could be easily used, or better code could be generated for my uses. Is there some way that this can be done? Thanks.

    EDIT:
    Command used:

      svcutil ObjectManagerService.wsdl /n:*,Sample /o:ObjectManagerServiceProxy.cs /nologo

    Code sample:

      public interface ObjectManagerSyncPortType
      {
          // CODEGEN: Generating message contract since the operation createObject is neither RPC nor document wrapped.
          [System.ServiceModel.OperationContractAttribute(Action="http://www.sample.com/createObject", ReplyAction="*")]
          [System.ServiceModel.XmlSerializerFormatAttribute()]
          Sample.createObjectResponse1 createObject(Sample.createObjectRequest1 request);

    As best as I can tell/see, the WSDL file is entirely self-contained and requires no additional XSD files.

    Read the article

  • java Finalize method call

    - by Rajesh Kumar J
    The following is my Class code import java.net.*; import java.util.*; import java.sql.*; import org.apache.log4j.*; class Database { private Connection conn; private org.apache.log4j.Logger log ; private static Database dd=new Database(); private Database(){ try{ log= Logger.getLogger(Database.class); Class.forName("com.mysql.jdbc.Driver"); conn=DriverManager.getConnection("jdbc:mysql://localhost/bc","root","root"); conn.setReadOnly(false); conn.setAutoCommit(false); log.info("Datbase created"); /*Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); conn=DriverManager.getConnection("jdbc:odbc:rmldsn"); conn.setReadOnly(false); conn.setAutoCommit(false);*/ } catch(Exception e){ log.info("Cant Create Connection"); } } public static Database getDatabase(){ return dd; } public Connection getConnection(){ return conn; } @Override protected void finalize()throws Throwable { try{ conn.close(); Runtime.getRuntime().gc(); log.info("Database Close"); } catch(Exception e){ log.info("Cannot be closed Database"); } finally{ super.finalize(); } } } This can able to Initialize Database Object only through getDatabase() method. The below is the program which uses the single Database connection for the 4 threads. public class Main extends Thread { public static int c=0; public static int start,end; private int lstart,lend; public static Connection conn; public static Database dbase; public Statement stmt,stmtEXE; public ResultSet rst; /** * @param args the command line arguments */ static{ dbase=Database.getDatabase(); conn=dbase.getConnection(); } Main(String s){ super(s); try{ stmt=conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE); start=end; lstart=start; end=end+5; lend=end; System.out.println("Start -" +lstart +" End-"+lend); } catch(Exception e){ e.printStackTrace(); } } @Override public void run(){ try{ URL url=new URL("http://localhost:8084/TestWeb/"); rst=stmt.executeQuery("SELECT * FROM bc.cdr_calltimestamp limit "+lstart+","+lend); while(rst.next()){ try{ rst.updateInt(2, 1); rst.updateRow(); conn.commit(); HttpURLConnection httpconn=(HttpURLConnection) url.openConnection(); httpconn.setDoInput(true); httpconn.setDoOutput(true); httpconn.setRequestProperty("Content-Type", "text/xml"); //httpconn.connect(); String reqstring="<?xml version=\"1.0\" encoding=\"US-ASCII\"?>"+ "<message><sms type=\"mt\"><destination messageid=\"PS0\"><address><number" + "type=\"international\">"+ rst.getString(1) +"</number></address></destination><source><address>" + "<number type=\"unknown\"/></address></source><rsr type=\"success_failure\"/><ud" + "type=\"text\">Hello World</ud></sms></message>"; httpconn.getOutputStream().write(reqstring.getBytes(), 0, reqstring.length()); byte b[]=new byte[httpconn.getInputStream().available()]; //System.out.println(httpconn.getContentType()); httpconn.getInputStream().read(b); System.out.println(Thread.currentThread().getName()+new String(" Request"+rst.getString(1))); //System.out.println(new String(b)); httpconn.disconnect(); Thread.sleep(100); } catch(Exception e){ e.printStackTrace(); } } System.out.println(Thread.currentThread().getName()+" "+new java.util.Date()); } catch(Exception e){ e.printStackTrace(); } } public static void main(String[] args) throws Exception{ System.out.println(new java.util.Date()); System.out.println("Memory-before "+Runtime.getRuntime().freeMemory()); Thread t1=new Main("T1-"); Thread t2=new Main("T2-"); Thread t3=new Main("T3-"); Thread t4=new Main("T4-"); t1.start(); t2.start(); t3.start(); t4.start(); 
        System.out.println("Memory-after "+Runtime.getRuntime().freeMemory());
        }
      }

    I need to close the connection after all the threads have finished executing. Is there a good way to do so? Kindly help me get this working.

    Read the article

  • adv. styling list with css

    - by Ayrton
    Hi, I need to style a regular HTML list like the following picture: as you see, each list item has padding on the sides and a top & bottom border. When hovered, the border has a width of 100% of the <ul> item. Now the problem actually is: when you give each <li> element a top & bottom border, I get a border of 2 px between each element (the bottom border of the first element plus the top border of the second element). I don't want that, but I do not know any solution for this.

    My HTML:

      <div id="tab_top" class="tab">
        <div class="bottom">
          <div class="cont">
            <ul>
              <li><a href="#">Here’s a Sample Post <span class="ct">32</span></a></li>
              <li><a href="#">Here’s a Sample Post <span class="ct">32</span></a></li>
              <li><a href="#">Here’s a Sample Post <span class="ct">32</span></a></li>
              <li><a href="#">Here’s a Sample Post <span class="ct">32</span></a></li>
              <li><a href="#">Here’s a Sample Post <span class="ct">32</span></a></li>
              <li><a href="#">Here’s a Sample Post <span class="ct">32</span></a></li>
              <li><a href="#">Here’s a Sample Post <span class="ct">32</span></a></li>
            </ul>
          </div>
        </div>
      </div>

    My CSS:

      #tabs .tab div.cont ul li a{line-height:30px; height:30px; color:#3ca097; display:block; padding-left:11px; padding-right:13px; width:259px;}
      #tabs .tab div.cont ul li a span.ct{float:right;background:url(images/count_comments.gif) no-repeat left top; height:13px; padding-left:16px; margin-top:10px; line-height:12px;}
      #tabs .tab div.cont ul li a:hover{color:#fff; background-color:#6fd2c8; border-top:1px solid #7db9b2; border-bottom:1px solid #7db9b2; height:28px; line-height:28px;}
      #tabs .tab div.cont ul li a:hover span.ct{background-position:left bottom; color:#23665f; margin-top:9px;}

    I would be pleased if you can help me. Yours truthfully

    Read the article

  • DrScheme versus mzscheme: treatment of definitions

    - by speciousfool
    One long term project I have is working through all the exercises of SICP. I noticed something a bit odd with the most recent exercise. I am testing a Huffman encoding tree. When I execute the following code in DrScheme I get the expected result: (a d a b b c a). However, if I execute this same code in mzscheme by calling (load "2.67.scm") or by running mzscheme -f 2.67.scm, it reports: symbols: expected symbols as arguments, given: (leaf D 1). My question is: why? Is it because mzscheme and DrScheme use different rules for loading program definitions? The program code is below.

      ;; Define an encoding tree and a sample message
      ;; Use the decode procedure to decode the message, and give the result.
      (define (make-leaf symbol weight) (list 'leaf symbol weight))
      (define (leaf? object) (eq? (car object) 'leaf))
      (define (symbol-leaf x) (cadr x))
      (define (weight-leaf x) (caddr x))
      (define (make-code-tree left right)
        (list left
              right
              (append (symbols left) (symbols right))
              (+ (weight left) (weight right))))
      (define (left-branch tree) (car tree))
      (define (right-branch tree) (cadr tree))
      (define (symbols tree)
        (if (leaf? tree)
            (list (symbol-leaf tree))
            (caddr tree)))
      (define (weight tree)
        (if (leaf? tree)
            (weight-leaf tree)
            (cadddr tree)))
      (define (decode bits tree)
        (define (decode-1 bits current-branch)
          (if (null? bits)
              '()
              (let ((next-branch (choose-branch (car bits) current-branch)))
                (if (leaf? next-branch)
                    (cons (symbol-leaf next-branch)
                          (decode-1 (cdr bits) tree))
                    (decode-1 (cdr bits) next-branch)))))
        (decode-1 bits tree))
      (define (choose-branch bit branch)
        (cond ((= bit 0) (left-branch branch))
              ((= bit 1) (right-branch branch))
              (else (error "bad bit -- CHOOSE-BRANCH" bit))))
      (define (test s-exp)
        (display s-exp)
        (newline))
      (define sample-tree
        (make-code-tree (make-leaf 'A 4)
                        (make-code-tree (make-leaf 'B 2)
                                        (make-code-tree (make-leaf 'D 1)
                                                        (make-leaf 'C 1)))))
      (define sample-message '(0 1 1 0 0 1 0 1 0 1 1 1 0))
      (test (decode sample-message sample-tree))

    Read the article
