Search Results

Search found 11146 results on 446 pages for 'dynamic queries'.


  • Centralized Exception handling for Eclipse plug-in

    - by Svilen
    Hello. At first I thought this would be a frequently asked question, but trying (and failing) to look up information on it proved me wrong. Is there a mechanism in the Eclipse platform for centralized handling of exceptions?

    For example: you have a plug-in project which connects to a DB and issues queries, the results of which are used to populate some views. This is about the most common example ever. :) Queries are executed for almost any user action, from every UI control the plug-in provides. Most likely the DB query API will declare some DB-specific SomeDBGeneralException as being thrown. That's OK; you can handle those according to whatever your software design is. But what about unchecked exceptions which are likely to occur, e.g. when communication with the DB suddenly breaks for some network-related reason? What if, in such a case, one would like to catch those exceptions in a central place and, for example, show a user-friendly message (rather than the low-level communication protocol API messages), and even offer some actions the user could take to deal with the specific problem?

    Thinking in the Eclipse platform context, the question may be rephrased as: "Is there an extension point like 'org.eclipse.ExceptionHandler' which allows declaring exception handlers for specific exceptions (with some kind of filtering support), giving a lot of flexibility in the actual handling?"
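    A minimal sketch of what such a central handler could look like, using the org.eclipse.ui.statusHandlers extension point (available since Eclipse 3.3); the plug-in id and the filtering logic below are hypothetical:

        import org.eclipse.core.runtime.IStatus;
        import org.eclipse.core.runtime.Status;
        import org.eclipse.ui.statushandlers.StatusAdapter;
        import org.eclipse.ui.statushandlers.WorkbenchErrorHandler;

        // Registered in plugin.xml via the org.eclipse.ui.statusHandlers extension point.
        public class CentralDbErrorHandler extends WorkbenchErrorHandler {

            @Override
            public void handle(StatusAdapter statusAdapter, int style) {
                Throwable t = statusAdapter.getStatus().getException();
                if (isDbCommunicationFailure(t)) {
                    // Swap the low-level message for a user-friendly status before
                    // the standard workbench dialog displays it.
                    statusAdapter = new StatusAdapter(new Status(IStatus.ERROR,
                            "com.example.myplugin",
                            "The database connection was lost. Please check the network and try again.",
                            t));
                }
                super.handle(statusAdapter, style);
            }

            private boolean isDbCommunicationFailure(Throwable t) {
                // Hypothetical filter: walk the cause chain for known failure types.
                while (t != null) {
                    if (t instanceof java.net.SocketException) {
                        return true;
                    }
                    t = t.getCause();
                }
                return false;
            }
        }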


  • Visual Studio and .NET programming

    - by Vit
    Hi, I just want to ask whether I am right or not about .NET. So, .NET is a new framework that enables you to easily use new and old Windows functions. It is similar to Java in the way that it is also compiled into "bytecode", but its name is Common Language Infrastructure, or CLI. This language is interpreted by the .NET Framework, so code generated by .NET programming cannot be executed directly by the CPU. Now, a few languages can be compiled to CLI: first the Microsoft-developed C#, then J#, C++ and others. I suspect that this is in general right; at least I hope I understand it correctly.

    But what I am still missing is: can you compile C# code to machine code? And, if using Visual Studio 2005, when I select a Win32 project, it is compiled into machine code, so the only thing you need to run such apps are the Windows dynamic-link libraries, since static libraries' code is embedded into the app during the linking phase. And those dynamic-link libraries come with every Windows installation, or are provided by DirectX installations. But when I select CLR in Visual Studio 2005, the app is compiled into CLI code; it first starts the .NET Framework, and then the .NET Framework executes the program, since it's not in machine code. So, am I right? I ask because you can read this information on the internet, but I have no one to tell me whether I understand it correctly or not. Thanks.


  • active record relations – who needs it?

    - by M2_
    Well, I'm confused about Rails queries. For example, with Affiche belongs_to :place and Place has_many :affiches, we can do this:

        @affiches = Affiche.all( :joins => :place )

    or

        @affiches = Affiche.all( :include => :place )

    and we will get a lot of extra SELECTs if there are many affiches:

        Place Load (0.2ms) SELECT "places".* FROM "places" WHERE "places"."id" = 3 LIMIT 1
        Place Load (0.3ms) SELECT "places".* FROM "places" WHERE "places"."id" = 3 LIMIT 1
        Place Load (0.8ms) SELECT "places".* FROM "places" WHERE "places"."id" = 444 LIMIT 1
        Place Load (1.0ms) SELECT "places".* FROM "places" WHERE "places"."id" = 222 LIMIT 1
        ...and so on...

    And (sic!) with :joins, every SELECT is doubled! Technically we could just write

        @affiches = Affiche.all( )

    and the result is totally the same (because we have the relations declared). The way out, to keep all the data in one query, is removing the relations and writing a big string with LEFT OUTER JOINs, but then there is still the problem of grouping the data into a multi-dimensional array, and the problem of similar column names, such as id. What is done wrong? Or what am I doing wrong?

    UPDATE: Well, I do get that one preloading query

        Place Load (2.5ms) SELECT "places".* FROM "places" WHERE ("places"."id" IN (3,444,222,57,663,32,154,20))

    and then a list of one-by-one SELECTs by id. Strange, but I get these separate SELECTs when I'm doing this inside an each scope:

        <%= link_to a.place.name, a.place( :id => a.place.friendly_id ) %>

    The a.place call in the URL position is the spot that produces these extra queries.
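    A minimal sketch of how eager loading is meant to behave, assuming the Rails 2.x-style finders used above; anything that re-fetches the association (such as a lookup by friendly_id inside the loop) bypasses the preloaded cache and triggers the extra one-by-one SELECTs:

        # 1 query for affiches + 1 IN-query preloading all their places
        @affiches = Affiche.all(:include => :place)

        @affiches.each do |a|
          # Served from the preloaded association cache -- no extra SELECT,
          # as long as nothing forces a reload of the associated record.
          puts a.place.name
        end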


  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL Server 2008. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define the question text for each QuestionId and demographics for each RespondentId. Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates the export; it's working. The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents).

    Right now I'm thinking: "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out which respondents are being exported, and then select their combined cached de-aggregated data.

    To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients, so that doesn't scale, and I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I would then be able to build my export with filtering plus XML calls into the XML column.

    What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat). Is there a better technology/approach for this? Thanks!
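    A sketch of the cache-then-shred idea, with hypothetical table and column names; the per-question paths depend on the XML shape actually cached:

        -- Cache each respondent's de-aggregated answers as one XML blob:
        UPDATE r
        SET    r.CachedAnswers = (SELECT a.QuestionId, a.Answer
                                  FROM   Answer a
                                  WHERE  a.RespondentId = r.RespondentId
                                  FOR XML PATH('question'), ROOT('data'), TYPE)
        FROM   Respondent r;

        -- Export by shredding the cached XML, one row per respondent:
        SELECT RespondentId,
               CachedAnswers.value('(/data/question[QuestionId=1]/Answer)[1]',
                                   'nvarchar(50)') AS [Question 1]
        FROM   Respondent;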


  • How can I cluster short messages [Tweets] based on topic ? [Topic Based Clustering]

    - by Jagira
    Hello, I am planning an application which will make clusters of short messages/tweets based on topics. The number of topics will be limited, like Sports [NBA, NFL, Cricket, Soccer], Entertainment [movies, music] and so on. I can think of two approaches to this:

    1. Ask users to tag messages like Stack Overflow does. Users can select tags from a predefined list, and on the server side I cluster the messages based on those tags.
       Pros: simple design; less complexity in code.
       Cons: choices for users are restricted, and the clusters are not dynamic; if a new event occurs, the predefined tags will miss it.

    2. Take the message, delete the stopwords [predefined in a dictionary] and apply some clustering algorithm to form clusters, displaying each cluster depending on its popularity. A cluster is kept alive according to its sustained popularity; new messages are skimmed and assigned to the corresponding clusters.
       Pros: dynamic clustering based on the popularity of the event/accident.
       Cons: increased complexity; more server resources required.

    I would like to know whether there are any other approaches to this problem, or any ways of improving the above methods. Also, please suggest some good clustering algorithms; I think a "k-nearest" style algorithm is apt for this situation.
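    As a starting point, a sketch of the second approach's clustering step (assuming scikit-learn is available); note that k-nearest-neighbour is a classifier rather than a clusterer, so k-means over tf-idf vectors is the more usual baseline here:

        # Cluster short messages by topic: tf-idf vectors + k-means.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        messages = ["nba finals tonight", "new movie trailer out",
                    "cricket world cup score", "oscar nominations announced"]
        X = TfidfVectorizer(stop_words="english").fit_transform(messages)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(labels)   # e.g. [0 1 0 1]: sports vs. entertainment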


  • select all values from a dimension for which there are facts in all other dimensions

    - by ideasculptor
    I've tried to simplify this for the purposes of asking the question; hopefully it is comprehensible. Basically, I have a fact table with a time dimension, a hierarchical dimension, and one other, purely descriptive dimension. For the purposes of the question, let's assume the hierarchical dimension is zip code and state, and call the descriptive dimension 'customer'. Let's assume there are 50 customers.

    I need to find the set of states for which there is at least one zip code in which EVERY customer has at least one fact row for each day in the time dimension. If a zip code has only 49 customers, I don't care about it. If even one of the 50 customers doesn't have a value for even one day in a zip code, I don't care about it. Finally, I also need to know which zip codes qualified each state for selection. Note there is no requirement that every zip code have a full data set, only that at least one zip code does.

    I don't mind making multiple queries and doing some processing on the client side. This is a data set that only needs to be generated once per day and can be cached. I don't even see a particularly clean way to do it with multiple queries short of simple brute-force iteration, and there are a heck of a lot of 'zip codes' in the data set (not actually zip codes, but there are approximately 100,000 entries in the lower level of the hierarchy and several hundred in the top level, so zip code/state is a reasonable analogy).
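    A sketch of one single-query shape for this, using double NOT EXISTS (relational division) over hypothetical star-schema names; with ~100,000 zip codes it may still want the once-a-day caching described above:

        -- Zip codes (and their states) where every customer has a fact row
        -- for every day; a state qualifies if at least one zip survives.
        SELECT z.state, z.zip_code
        FROM   zip_dim z
        WHERE  NOT EXISTS (
                 SELECT 1
                 FROM   customer_dim c
                 CROSS JOIN date_dim d
                 WHERE  NOT EXISTS (
                          SELECT 1
                          FROM   fact f
                          WHERE  f.zip_key      = z.zip_key
                          AND    f.customer_key = c.customer_key
                          AND    f.date_key     = d.date_key));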


  • BlackBerry Deployment Strategies

    - by cagreen
    I'm new to large-scale BB app deployment and I'm looking for some clarification on the various methods of deployment. Please bear with me, as I'm sure there is more to it than my naive view would lead me to believe. My app is very targeted at corporate users and requires a subscription to some additional services before it can be used. In other words, it's not aimed at the consumer market, so I'm not worried about people not being able to easily find it online. What do I need to be aware of when looking at deployment strategies? Any gotchas? From my understanding my choices are:

    • App World: small upfront vendor fee; users can easily search for and find my app; billing handled by RIM; 4 licensing models (static, single, pool, dynamic), though I'm not sure I've seen enough info on the pool and dynamic models to fully appreciate how they might help me.
    • Download from my website: billing is handled by me; but can I enforce the number of licenses in use within an organization? And is this easier or harder for a user?
    • What else am I missing?


  • HOWTO: implement a jQuery version of ASP.Net MVC "Strongly Typed Partial Views"

    - by Sam Carleton
    I am working on a multi-page assessment form where the questions/responses are database driven. Currently I have the basic system working with Html.BeginForm via standard ASP.NET MVC. At this point in time, the key to the whole system is 'strongly typed partial views'. When a question/response is read from the database, the response type determines which derived model is created and added to the collection. The main view iterates through the collection and uses the strongly typed partial view system of ASP.NET MVC to decide which view renders each response correctly (radio button, drop-down, or text box). I would like to change this process from Html.BeginForm to Ajax.BeginForm. The problem is I don't have a clue how to implement the dynamic creation of the questions/responses in the JavaScript/jQuery world. Any thoughts and/or suggestions? Here is the current code that generates the dynamic form:

        @using (Html.BeginForm(new { mdsId = @Model.MdsId, sectionId = @Model.SectionId }))
        {
            <div class="SectionTitle">
                <span>Section @Model.SectionName - @Model.SectionDescription</span>
                <span style="float: right">@Html.CheckBoxFor(x => x.ShowUnansweredQuestions) Show only unanswered questions</span>
            </div>
            @Html.HiddenFor(x => x.PrevSectionId)
            @Html.HiddenFor(x => x.NextSectionId)
            for (var i = 0; i < Model.answers.Count(); i++)
            {
                @Html.EditorFor(m => m.answers[i]);
            }
        }
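    One sketch of the jQuery side (the form id, container id and controller behaviour are hypothetical): post the form with AJAX and let the server keep rendering the strongly typed partials, so the per-response-type editor templates still run server-side:

        $("#assessmentForm").submit(function (e) {
            e.preventDefault();
            $.post($(this).attr("action"), $(this).serialize(), function (html) {
                // The action returns the next section as a rendered partial view,
                // so no question/response markup is built in JavaScript at all.
                $("#formContainer").html(html);
            });
        });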


  • confusion about using types instead of gtts in oracle

    - by Omnipresent
    I am trying to convert queries like the one below to types, so that I won't have to use a GTT:

        insert into my_gtt_table_1
          (house, lname, fname, MI, fullname, dob)
          (select house, lname, fname, MI, fullname, dob
             from (select 'REG' house,
                          mbr_last_name lname,
                          mbr_first_name fname,
                          mbr_mi MI,
                          mbr_first_name || mbr_mi || mbr_last_name fullname,
                          mbr_dob dob
                     from table_1 a, table_b
                    where a.head = b.head
                      and mbr_number = '01'
                      and mbr_last_name = v_last_name) c

    The above is just a sample; the real queries are more complex, and they run inside a stored procedure. So, to avoid the GTT (my_gtt_table_1), I did the following:

        create or replace type lname_row as object (
          house    varchar2(30),
          lname    varchar2(30),
          fname    varchar2(30),
          MI       char(1),
          fullname varchar2(63),
          dob      date
        )

        create or replace type lname_exact as table of lname_row

    Now in the SP:

        type lname_exact is table of <what_table_should_i_put_here>%rowtype;
        tab_a_recs lname_exact;

    In the above I am not sure what table to put there, as my query has nested subqueries. The query in the SP (I am trying this on a sample to see if it works):

        select lname_row('', '', '', '', '', '', sysdate)
          bulk collect into tab_a_recs
          from table_1;

    I am getting errors like ORA-00913: too many values. I am really confused and stuck with this :(
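    A sketch of how this can fit together, assuming the six-attribute type above: the constructor must receive exactly one argument per attribute (the sample call passes seven values to a six-attribute type, which alone raises ORA-00913), and the SQL collection type replaces the local %ROWTYPE table declaration entirely:

        declare
          tab_a_recs lname_exact;  -- the SQL type itself; no %rowtype needed
        begin
          select lname_row('REG', mbr_last_name, mbr_first_name, mbr_mi,
                           mbr_first_name || mbr_mi || mbr_last_name, mbr_dob)
            bulk collect into tab_a_recs
            from table_1;
        end;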


  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GB each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options:

    • Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
    • Denormalize the database and use Redis on each web server, with BookSleeve as the Redis client.
    • Use a combination of SQL Server and Redis.
    • Use SQL Server 2012 together with Hadoop.
    • Use Hadoop without SQL Server.

    What is the best way for a read-only database to get the best performance without losing maintainability? Does map-reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table that all queries can be performed on, with all other information from the other tables reachable by a simple key lookup.


  • PHP curl timing mismatch

    - by JonoB
    I am running a PHP script that:

    1. queries a local database to retrieve an amount,
    2. executes a curl statement to update an external database with the above amount + x,
    3. queries the local database again to insert a new row reflecting that the curl statement has been executed.

    One of the problems I am having is that the curl statement takes 2-4 seconds to execute. If two different users from the same company run the same script at the same time, the execution time of the curl command can cause a mismatch in what should be updated in the external database: the curl statement has not yet returned for the first user, so the second user is working off incorrect figures.

    I am not sure of the best options here, but basically I need to prevent two or more curl statements being run at the same time. I thought of storing a flag in the database indicating that a curl statement is executing, and preventing any other curl statements from running until it completes. Once the first curl statement has executed, the flag is reset and the next one can run. While the flag is 'locked', I could loop, sleep for 5 seconds, and then check again whether the flag has been reset. After 3 loops, I would reset the flag automatically (I've never seen the curl take longer than 5 seconds) and continue processing. Are there any other (more elegant) ways of approaching this?
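    One sketch of a more elegant lock, assuming the local database is MySQL: GET_LOCK() provides a named, server-side mutex with a built-in wait timeout, and it is released automatically if the holding connection dies, so no flag column or reset loop is needed (the helper functions here are hypothetical stand-ins for the three steps above):

        // Wait up to 15 seconds for the named lock before giving up.
        $got = mysql_result(mysql_query("SELECT GET_LOCK('external_update', 15)"), 0);
        if (!$got) {
            die('Another update is still in progress; please retry shortly.');
        }

        $amount = fetch_local_amount();   // step 1: read the local amount
        push_via_curl($amount + $x);      // step 2: the 2-4 second curl call
        record_local_update($amount);     // step 3: log completion locally

        mysql_query("SELECT RELEASE_LOCK('external_update')");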


  • JavaScript: Very strange behavior with assigning methods in a loop

    - by Andrey
    Consider the code below:

        <a href="javascript:void(-1)" id="a1">a1</a>
        <a href="javascript:void(-1)" id="a2">a2</a>

        <script type="text/javascript">
        var buttons = []
        buttons.push("a1")
        buttons.push("a2")

        var actions = []
        for (var i in buttons) {
            actions[buttons[i]] = function() { alert(i) }
        }

        var elements = document.getElementsByTagName("a")
        for (var k = 0; k < elements.length; k++) {
            elements[k].onclick = actions[elements[k].id]
        }
        </script>

    Basically, it shows two anchors, a1 and a2, and I expect to see "1" and "2" popping up in an alert when clicking on the corresponding anchor. That doesn't happen; I get "2" when clicking on either. After spending an hour meditating on the code, I decided that it probably happens because the dynamically assigned onclick methods for both anchors keep the last value of "i". So I changed the loop to

        for (var i in buttons) {
            var local_i = i.toString()
            actions[buttons[i]] = function() { alert(local_i) }
        }

    hoping that each dynamic function would get its own copy of "i" with an immediate value. But after this change I get "1" popping up when I click on either link. What am I doing wrong? It's a huge show stopper for me.
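    The underlying issue is that var is function-scoped, not block-scoped, so local_i is still one shared variable that every handler closes over. A minimal sketch of the classic fix: capture the value with an immediately invoked function so each handler gets its own copy:

        for (var i in buttons) {
            actions[buttons[i]] = (function (captured) {
                // Each returned handler closes over its own 'captured' argument.
                return function () { alert(captured) }
            })(i)
        }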


  • Sql Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table and then dropping it at the end of the transaction. This mostly works. Sometimes, however, when creating the tables, SQL Compact will try to acquire PAGE-level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception.

    For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not the one writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this. Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.


  • How to systematically generate images from data?

    - by adamvickers
    I work for a performing arts nonprofit. We have seating charts for each of the theaters we work with; each seating chart shows the number of sections, the shape of each section, and the number of rows in each section. We'd like to create dynamic seating charts based on this info, with a look and feel something like this: http://www.fansnap.com/tickets/177754-on. The tricky part is that we'd like to store all the info about each theater (the section names, the shape/size of each section, and the number of rows in each section) as data, and then build a system that reads this data and uses it to draw a dynamic map.

    I'm a life-long web developer, but I don't have any experience with a graphics problem like this. I realize it's a complex problem and I don't expect anyone to give me a complete answer here, but I would love direction on where I should be looking for more info. Is what I'm describing possible? Does this sort of technique have a name? Where can I learn more about how to accomplish this? What software should I use? Any info would be helpful.


  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
    Is there a MySQL statement which provides full details of any other open connection or user? For this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose.

    For example: remote ODBC connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. TCP connection two, using PHP on the server's localhost, is running SELECT queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check that there are no pending inserts on those specific tables from any other connection, so it can instead wait until all the data is available. If a table is currently being written to, I'd like to report back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to get back via a query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.

    Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() it if the only way is a mysql command-line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script could check, but hopefully there's already a built-in MySQL feature I can take advantage of.
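    The closest built-in is the process list, sketched below; it shows each connection's current statement and how many seconds it has been running, though not byte counts or rows remaining (the table name in the filter is hypothetical):

        SHOW FULL PROCESSLIST;

        -- MySQL 5.1+ also exposes the same data as a queryable table:
        SELECT id, user, time, state, info
        FROM   information_schema.processlist
        WHERE  info LIKE 'INSERT INTO my_table%';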


  • MSMQ - Message Queue Abstraction and Pattern

    - by Maxim Gershkovich
    Hi All, Let me define the problem first, and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to design my application from the ground up with this in mind. I have decided to tackle the problem by using Microsoft Message Queuing and performing the inserts asynchronously, as time permits.

    However, I quickly ran into a problem. Certain inserts that I perform may need to be recalled (i.e. retrieved) immediately (imagine this is for a POS system, and what happens if you need to recall the last transaction, one that still hasn't been inserted). The way I decided to tackle this is by abstracting the MessageQueue and combining it with my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer. (I have considered the other issues that occur in such a scenario, essentially dirty reads and such, and have concluded that for my purposes I can control them.)

    This is where things get a little nasty. I've worked out how to get the messages back (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue, one that minimizes the duplication between the SQL queries and the MessageQueue queries? I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly.

    Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have ideas of their own about how I can tackle this problem? Does anyone even understand what I am talking about? :-) Any and ALL input would be highly appreciated and seriously considered. Thanks again.
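    A sketch of the LINQ idea (the Transaction type and the two sources are hypothetical): express the filter once as an expression tree, hand it to the SQL-backed IQueryable as-is, and compile it to a delegate for the in-memory view of queued messages, so the predicate is written only once:

        // 'db.Transactions' is an IQueryable<Transaction> (e.g. LINQ to SQL);
        // 'pendingMessages' is an IEnumerable<Transaction> deserialized from MSMQ.
        Expression<Func<Transaction, bool>> filter = t => t.Amount > 100m;

        var fromDb    = db.Transactions.Where(filter);           // translated to SQL
        var fromQueue = pendingMessages.Where(filter.Compile()); // evaluated in memory
        var combined  = fromDb.AsEnumerable().Concat(fromQueue);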


  • How to make a div extend when floating it?

    - by cjmcjm
    I'm trying to create a bar with a dynamic horizontal width. The backgrounds are transparent PNGs, so they can't overlap. I have one for the left side, one to repeat-x across the dynamic-width middle, and then another background for the right. Here is roughly what I have so far:

        .bar{ width: 100%; }
        .left{ width: 50px; height: 50px; float: left; }
        .mid{ height: 50px; float: left; }
        .right{ width: 50px; height: 50px; float: right; }

        <div class="bar">
            <div class="left"></div>
            <div class="mid"></div>
            <div class="right"></div>
        </div>

    So the main problem is extending the .mid all the way across to meet the right; width: 100% doesn't work. The other problem is: what can I do if I have content that needs to overlap the .left and .mid divs? Set up another div and use z-index? Thanks so much!
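    One sketch of a fix, assuming the markup order can change: float only the two end caps and let the middle fill the remaining space by giving it overflow: hidden instead of a float; note that .right must then come before .mid in the markup, since floats must precede the content that flows beside them:

        .left  { width: 50px; height: 50px; float: left; }
        .right { width: 50px; height: 50px; float: right; }
        /* No float and no width: overflow: hidden makes .mid a block that
           occupies exactly the space between the two floated caps. */
        .mid   { height: 50px; overflow: hidden; }

        <div class="bar">
            <div class="left"></div>
            <div class="right"></div>
            <div class="mid"></div>
        </div>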


  • Is there a way to automatically load navigational property using the .NET Entity Framework?

    - by René Wolferink
    Stepping further and further away from writing SQL for my applications, I decided to give the Entity Framework a try. However, I've run into something that I believe is causing me to write more code than should be strictly necessary. When I accessed some navigation properties, I discovered that all many-to-one relations (simple references) were null and all one-to-many and many-to-many relations (EntityCollections) were empty.

    For example: I have a User with a reference to a Group. When I have retrieved a User by a simple select-by-id, the Group property is null. If I want to access the Group, I have to load it manually (using User.GroupReference.Load()). So I added a GetGroup() method to User which checks whether the Group is already loaded and, if not, loads it and then returns the Group. Now this results in a lot of highly similar methods for all navigation properties, and it means the navigation properties themselves are never used; only my custom-made Get"PropertyName"() methods are.

    I don't want to expand my queries (LINQ to Entities) to immediately load all these properties, because it's not always known up front what is needed; besides, it would cause a lot of extra queries. Is there a way to configure the Entity Framework to load these objects when they happen to not be present? So that when I access User.Group and the group is not loaded yet, it is loaded automatically? Or am I stuck with my own Get"PropertyName"() methods as long as I'm trying to load objects only on demand (or "just in time")?

    Some extra info: I'm using VS2008 SP1 with .NET 3.5 SP1. The Entity Framework I'm using is the one that shipped with it.
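    EF1 (the .NET 3.5 SP1 version) has no automatic lazy loading (EF4 later adds ContextOptions.LazyLoadingEnabled), but here is a sketch of two ways to cut the boilerplate: eager-load per query with Include, or replace the per-property GetX() methods with one generic extension (the entity names are illustrative):

        // Eager-load the association only for the queries that need it:
        var user = context.Users.Include("Group").First(u => u.Id == id);

        // Or one reusable helper instead of a hand-written GetGroup() per property:
        public static class EntityReferenceExtensions
        {
            public static T Ensure<T>(this EntityReference<T> reference)
                where T : class, IEntityWithRelationships
            {
                if (!reference.IsLoaded)
                    reference.Load();   // fetch on first access only
                return reference.Value;
            }
        }

        // usage: var group = user.GroupReference.Ensure();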


  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    Am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds; then all goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. Am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes that run whenever the X period happens. The X period could be anywhere between 30 minutes and 2 hours; for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours.

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size= 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections=512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided over 3 databases. The most written-to tables are InnoDB; the others are mostly read. Several of the InnoDB tables have more than 2 million records; the other databases top out at about 400 thousand records and do not change very often. The PC is a Core 2 Duo E8400 with 4 GB RAM, running 32-bit Ubuntu.
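    A first diagnostic step (a sketch using standard MySQL commands): capture what the server is doing during a hiccup and compare it with a quiet period; with a 1 GB buffer pool and the default log file size, periodic stalls are often checkpoint/flush activity, which these counters would show:

        SHOW FULL PROCESSLIST;   -- what every connection is waiting on right now
        SHOW ENGINE INNODB STATUS\G
        SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
        SHOW GLOBAL STATUS LIKE 'Innodb_data_pending_fsyncs';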


  • How to optimize a postgreSQL server for a "write once, read many"-type infrastructure ?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when an entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries happens rather fast and regularly, so the database will commonly hold several million elements.

    The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids over the two tables. To sum it up:

    • 2 tables
    • lots of INSERT
    • no UPDATE
    • some DELETE, once a day at most
    • some user-generated SELECTs with JOIN
    • huge data set

    What would an optimal server configuration (software and hardware; I assume for example that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that lets SELECT queries take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes...) if needed.
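    As a starting point only, a sketch of postgresql.conf settings commonly tuned for this kind of insert-heavy, read-mostly workload; the values are illustrative, assume a dedicated box with around 16 GB RAM, and should be measured before and after:

        shared_buffers = 4GB            # ~25% of RAM is a common rule of thumb
        effective_cache_size = 12GB     # tells the planner how much the OS caches
        checkpoint_segments = 32        # fewer, larger checkpoints under heavy writes
        wal_buffers = 16MB
        maintenance_work_mem = 512MB    # speeds up index builds and VACUUM
        random_page_cost = 2.0          # lower on RAID 10 / fast storage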


  • MySQL: How to pull information from multiple tables based on information in other tables?

    - by Greg
    Ok, I have 5 tables from which I need to pull information based on one variable:

        gameinfo:     id | name | platforminfoid
        gamerinfo:    id | name | contact | tag
        platforminfo: id | name | abbreviation
        rosterinfo:   id | name | gameinfoid
        rosters:      id | gamerinfoid | rosterinfoid

    The one variable would be gamerinfo.id, which would pull all relevant data from gamerinfo, which would pull all relevant data from rosters, then rosterinfo, then gameinfo, and finally platforminfo. Basically it breaks down like this: gamerinfo contains the gamer's basic information; rosterinfo contains basic information about the rosters (i.e. name and the game the roster is aimed at); rosters contains the actual link from the gamer to the different rosters (gamers can be on multiple rosters); gameinfo contains basic information about the games (i.e. name and platform); and platforminfo contains information about the different platforms the games are played on (it is possible for a game to be played on multiple platforms).

    I am pretty new to SQL queries involving JOINs and UNIONs; usually I would just break this up into multiple queries, but I thought there has to be a better way. After looking around the net, I couldn't find (or maybe I just couldn't understand what I was looking at) what I was looking for. If anyone can point me in the right direction I would be most grateful.
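    A sketch of the single five-table join, following the column names above and driven by one gamerinfo.id (no UNION is needed, since every link is a straight foreign key):

        SELECT g.name  AS gamer,
               r.name  AS roster,
               gi.name AS game,
               p.name  AS platform,
               p.abbreviation
        FROM   gamerinfo g
        JOIN   rosters ros    ON ros.gamerinfoid = g.id
        JOIN   rosterinfo r   ON r.id            = ros.rosterinfoid
        JOIN   gameinfo gi    ON gi.id           = r.gameinfoid
        JOIN   platforminfo p ON p.id            = gi.platforminfoid
        WHERE  g.id = 1;   -- the one input variable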


  • Converting rows to Columns in SQL

    - by Ram
    I have a table (actually a view, but I have simplified my example to a table) which gives me data like this:

        id | CompanyName | website
        ---+-------------+--------------
         1 | Google      | google.com
         2 | Google      | google.net
         3 | Google      | google.org
         4 | Google      | google.in
         5 | Google      | google.de
         6 | Microsoft   | Microsoft.com
         7 | Microsoft   | live.com
         8 | Microsoft   | bing.com
         9 | Microsoft   | hotmail.com

    I am looking to convert it to a result like this:

        CompanyName | website1      | website2   | website3   | website4    | website5  | website6
        ------------+---------------+------------+------------+-------------+-----------+---------
        Google      | google.com    | google.net | google.org | google.in   | google.de | NULL
        Microsoft   | Microsoft.com | live.com   | bing.com   | hotmail.com | NULL      | NULL

    I have looked into PIVOT, but it seems the pivoted row values cannot be dynamic (they can only be certain predefined values). Also, if there are more than 6 websites, I want to limit the output to the first 6. A dynamic pivot makes sense, but would I have to incorporate it into my view? Is there a simpler solution for this? Here are the SQL scripts:

        CREATE TABLE [dbo].[Company](
            [id] [int] NULL,
            [CompanyName] [varchar](50) NULL,
            [website] [varchar](50) NULL
        ) ON [PRIMARY]
        GO

        insert into company values (1,'Google','google.com')
        insert into company values (2,'Google','google.net')
        insert into company values (3,'Google','google.org')
        insert into company values (4,'Google','google.in')
        insert into company values (5,'Google','google.de')
        insert into company values (6,'Microsoft','Microsoft.com')
        insert into company values (7,'Microsoft','live.com')
        insert into company values (8,'Microsoft','bing.com')
        insert into company values (9,'Microsoft','hotmail.com')
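    A sketch that sidesteps PIVOT's fixed-value limitation: number each company's websites with ROW_NUMBER(), then fold the first six into columns with MAX(CASE ...); this also enforces the six-website cap, and the same SELECT can wrap the existing view:

        SELECT CompanyName,
               MAX(CASE WHEN rn = 1 THEN website END) AS website1,
               MAX(CASE WHEN rn = 2 THEN website END) AS website2,
               MAX(CASE WHEN rn = 3 THEN website END) AS website3,
               MAX(CASE WHEN rn = 4 THEN website END) AS website4,
               MAX(CASE WHEN rn = 5 THEN website END) AS website5,
               MAX(CASE WHEN rn = 6 THEN website END) AS website6
        FROM  (SELECT CompanyName, website,
                      ROW_NUMBER() OVER (PARTITION BY CompanyName
                                         ORDER BY id) AS rn
               FROM   dbo.Company) AS numbered
        WHERE  rn <= 6
        GROUP  BY CompanyName;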


  • Extend legacy site with another server-side programming platform best practice

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. Within one week, I have to put dynamic data from one intranet company database onto this site: 2-3 pages. I contacted the company site administrator and she showed me the administrative part; the CMS only allows inserting HTML blocks and managing the site map (the site is deployed on a machine that is inside the company and is fully accessible and upgradeable).

    I'm not a PHP guy, and I don't want to dive into a legacy CMS engine that hardly anyone has ever heard of. I also don't want to contact the original developer team, because I'm not sure they are still around and capable of extending this old site, and it would take too much time anyway.

    I am about to deploy a helper ASP.NET site on IIS with the 2-3 required pages and refer to the helper site via an iframe from the present site. The new pages will also allow downloading some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?


  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app that has a master MySQL db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving web pages. I am using an include file to set up the connections to the databases, such as:

        //for master server (i.e. - UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect( $Host, $User, $Password );
        if( !$Link ) {
            die( "Master database is currently unavailable. Please try again later." );
        }

        //this connection can be used for READ-ONLY (i.e. - SELECT statements) on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect( $Host_Local, $User_Local, $Password_Local );

        //fail back to master if slave db is down
        if( !$Link_Local ) {
            $Link_Local = mysql_connect( $Host, $User, $Password );
        }

    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until a slave server database goes down. If the local db is down, the mysql_connect call takes at least 30 seconds before it gives up trying to connect to localhost and returns to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time).

    Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout function that could be used to stop the mysql_connect call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.
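    One sketch of a fix: the old mysql extension honours the mysql.connect_timeout ini setting, so the doomed connect attempt can be capped at a couple of seconds before falling back to the master:

        // Cap the connect attempt at 2 seconds instead of the ~30 second default.
        ini_set( 'mysql.connect_timeout', 2 );
        $Link_Local = @mysql_connect( $Host_Local, $User_Local, $Password_Local );

        //fail back to master if slave db is down
        if( !$Link_Local ) {
            $Link_Local = mysql_connect( $Host, $User, $Password );
        }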


  • Analyzing an IronPython Scope

    - by Vercinegetorix
    I'm trying to write C# code with an embedded IronPython script, and then analyze the contents of the script, i.e. list all variables, functions, classes and their members/methods. There's an easy way to start, assuming I've got a scope defined and code executed in it already:

        dynamic variables = pyScope.GetVariables();
        foreach (string v in variables)
        {
            // seems to return everything: variables, functions, classes, instances of classes
            dynamic dynamicV = pyScope.GetVariable(v);
        }

    But how do I figure out what the type of a variable is? For the following Python 'objects', dynamicV.GetType() returns different values:

        x=5            -- System.Int32
        y="asdf"       -- System.String
        def func():... -- IronPython.Runtime.PythonFunction
        z=class()      -- IronPython.Runtime.Types.OldInstance; how can I identify the actual Python class?
        class NewClass -- throws an error; GetType() is unavailable.

    This is almost what I'm looking for. I could catch the exception thrown when GetType() is unavailable and assume it's a class declaration, but that seems unclean. Is there a better approach? To discover the members/methods of a class it looks like I can use:

        ObjectOperations op = pyEngine.Operations;
        object instance = op.Call("className");
        foreach (string j in op.GetMemberNames("className"))
        {
            // once again, GetType() provides some info about the type of the member,
            // but returns null sometimes
            object member = op.GetMember(instance, j);
            Console.WriteLine(member.GetType());
        }

    Also, how do I get the parameters of a method? Thanks!
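    A sketch of an alternative to CLR GetType(): ask Python itself for the type via __class__ through the hosting API's ObjectOperations; Format() then renders the Python-level name, and the try-probe covers old-style class objects, which are the notable case lacking __class__:

        ObjectOperations ops = pyEngine.Operations;
        foreach (string name in pyScope.GetVariableNames())
        {
            object value = pyScope.GetVariable(name);
            object pyClass;
            // __class__ exists on values, functions and instances alike;
            // fall back to formatting the object itself when it is absent.
            string typeName = ops.TryGetMember(value, "__class__", out pyClass)
                ? ops.Format(pyClass)
                : ops.Format(value);
            Console.WriteLine("{0} : {1}", name, typeName);
        }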

