Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • Changing the Game: Why Oracle is in the IT Operations Management Business

    - by DanKoloski
    Next week, in Orlando, is the annual Gartner IT Operations Management Summit. Oracle is a premier sponsor of this annual event, which brings together IT executives for several days of high level talks about the state of operational management of enterprise IT. This year, Sushil Kumar, VP Product Strategy and Business Development for Oracle’s Systems & Applications Management, will be presenting on the transformation in IT Operations required to support enterprise cloud computing.

    IT Operations transformation is an important subject, because year after year, we hear essentially the same refrain – large enterprises spend an average of two-thirds (67%!) of their IT resources (budget, energy, time, people, etc.) on running the business, with far too little left over to spend on growing and transforming the business (which is what the business actually needs and wants). In the thirtieth year of the distributed computing revolution (give or take, depending on how you count it), it’s amazing that we have still not moved the needle on the single biggest component of enterprise IT resource utilization.

    Oracle is in the IT Operations Management business because when management is engineered together with the technology under management, the resulting efficiency gains can be truly staggering. To put it simply – what if you could turn that 67% of IT resources spent on running the business into 50%? Or 40%? Imagine what you could do with those resources. It’s now not just possible, but happening.

    This seems like a simple idea, but it is a radical change from “business as usual” in enterprise IT Operations. For the last thirty years, management has been a bolted-on afterthought – we pick and deploy our technology, then figure out how to manage it. This pervasive dysfunction is a broken cycle that guarantees high ongoing operating costs and low agility. If we want to break the cycle, we need to take a more tightly-coupled approach. As a complete applications-to-disk platform provider, Oracle is engineering management together with technology across our stack and hooking that on-premise management up live to My Oracle Support.

    Let’s examine the results with just one piece of the Oracle stack – the Oracle Database. Oracle began this journey with the Oracle Database 9i many years ago with the introduction of low-impact instrumentation in the database kernel (“tell me what’s wrong”) and through Database 10g, 11g and 11gR2 has successively added integrated advisory (“tell me how to fix what’s wrong”), lifecycle management and automated self-tuning (“fix it for me, and do it on an ongoing basis for all my assets”). When enterprises take advantage of this tight-coupling, the results are game-changing.
    Consider the following (for a full list of public references, visit this link):

    • British Telecom improved database provisioning time 1000% (from weeks to minutes), which allows them to provide a new DBaaS service to their internal customers with no additional resources
    • Cerner Corporation saved $9.5 million in CapEx and OpEx AND launched a brand-new cloud business at the same time
    • Vodafone Group plc improved response times 50% and reduced maintenance planning times 50-60% while serving 391 million registered mobile customers

    Or see the recent Database Manageability and Productivity Cost Comparisons: Oracle Database 11g Release 2 vs. SAP Sybase ASE 15.7, Microsoft SQL Server 2008 R2 and IBM DB2 9.7, as conducted by independent analyst firm ORC. In later entries, we’ll discuss similar results across other portions of the Oracle stack and how these efficiency gains are required to achieve the agility benefits of Enterprise Cloud.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • SQL: Join vs. subquery

    - by Col. Shrapnel
    I am an old-school MySQL user and have always preferred JOIN over sub-queries. But nowadays everyone uses sub-queries, and I hate it, dunno why. I lack the theoretical knowledge to judge for myself whether there is any difference. So I am curious: is a sub-query as good as a join, and is there really nothing to worry about?
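
    For readers weighing the two forms, here is a minimal sketch against a hypothetical customers/orders schema. Historically MySQL's optimizer handled the JOIN form better; newer versions rewrite many IN sub-queries into semi-joins, so with suitable indexes the plans often converge:

        -- JOIN form
        SELECT DISTINCT c.id, c.name
        FROM customers c
        JOIN orders o ON o.customer_id = c.id
        WHERE o.total > 100;

        -- equivalent sub-query (IN) form
        SELECT c.id, c.name
        FROM customers c
        WHERE c.id IN (SELECT o.customer_id
                       FROM orders o
                       WHERE o.total > 100);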

    Read the article

  • Exec problem in SQL Server 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its data tables. I need to create a script that will transfer the data from the first database to the second. I have created this script. DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX) DECLARE @table_cursor CURSOR SET @table_cursor = CURSOR FAST_FORWARD FOR Select TABLE_NAME FROM INFORMATION_SCHEMA.TABLES OPEN @table_cursor FETCH NEXT FROM @table_cursor INTO @table_name WHILE @@FETCH_STATUS = 0 BEGIN SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name print @query exec @query FETCH NEXT FROM @table_cursor INTO @table_name END CLOSE @table_cursor DEALLOCATE @table_cursor The problem is that when I run the script, the "print @query" statement prints statements like this: INSERT INTO table SELECT * FROM MyDataBase.dbo.table When I copy this and run it from Management Studio it works fine. But when the script tries to run it with exec I get this error: Msg 911, Level 16, State 1, Line 21 Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'. No entry found with that name. Make sure that the name is entered correctly. Hope someone can tell me what is wrong with this. Best Regards, Iordan Tanev
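
    The error itself points at the cause: EXEC followed by a bare variable is the stored-procedure-call form, so SQL Server tries to interpret the whole string as a procedure name. A minimal sketch of the fix inside the loop:

        -- EXEC @query treats the string as a procedure name;
        -- parentheses execute it as an ad hoc batch instead
        EXEC (@query)
        -- or, preferably, run it as a batch via the system procedure:
        EXEC sp_executesql @query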

    Read the article

  • Why does LINQ-to-SQL Paging fail inside a function?

    - by ssg
    Here I have an arbitrary IEnumerable<T>, and I'd like to page it using a generic helper function instead of writing Skip/Take pairs every time. Here is my function: IEnumerable<T> GetPagedResults<T>(IEnumerable<T> query, int pageIndex, int pageSize) { return query.Skip((pageIndex - 1) * pageSize).Take(pageSize); } And my code is: result = GetPagedResults(query, 1, 10).ToList(); This produces a SELECT statement without the TOP 10 keyword. But this code below produces the SELECT with it: result = query.Skip((pageIndex - 1) * pageSize).Take(pageSize).ToList(); What am I doing wrong in the function?
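
    The IEnumerable<T> parameter is the culprit: it binds Skip/Take to LINQ to Objects, so the paging runs in memory after the unrestricted SELECT has already been sent. Keeping the expression tree alive with IQueryable<T> restores the translation; a minimal sketch:

        IQueryable<T> GetPagedResults<T>(IQueryable<T> query, int pageIndex, int pageSize)
        {
            // Queryable.Skip/Take extend the expression tree, so LINQ to SQL
            // can translate the paging into the generated SELECT (TOP/ROW_NUMBER)
            return query.Skip((pageIndex - 1) * pageSize).Take(pageSize);
        }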

    Read the article

  • DBLinq not generating where clause

    - by sipwiz
    I'm testing out DBLinq-0.18 and DBLinq from SVN trunk with MySQL and PostgreSQL. I'm only using a very simple query, but on both databases DBLinq is not generating a WHERE clause. I have confirmed this by turning on statement logging on PostgreSQL to check exactly what request DBLinq is sending. My LINQ query is: MyDB db = new MyDB(new NpgsqlConnection("Database=database;Host=localhost;User Id=postgres;Password=password")); var customers = from customer in db.Customers where customer.CustomerUserName == "test" select customer; The query works OK, but the SQL generated by DBLinq is of the form: select customerusername, customerpassword .... from public.customers There is no WHERE clause, which means DBLinq must be pulling the whole table down before running the LINQ query. Has anyone had any experience with DBLinq and know what I could be doing wrong?

    Read the article

  • PHP calling PostgreSQL function - type issue?

    - by CitrusTree
    I have a function in PostgreSQL / plpgsql with the following signature: CREATE OR REPLACE FUNCTION user_login(TEXT, TEXT) RETURNS SETOF _get_session AS $$ ... $$ where _get_session is a view. The function works fine when calling it from phpPgAdmin, however when I call it from PHP I get the following error: Warning: pg_query() [function.pg-query]: Query failed: ERROR: type "session_ids" does not exist CONTEXT: compile of PL/pgSQL function "user_login" near line 2 in /home/sites/blah.com/index.php on line 69 The DECLARE section of the function contains the following variables: oldSessionId session_ids := $1; newSessionId session_ids := $2; The domain session_ids DOES exist, and other functions which use the same domain work when called from the same script. The PHP is as follows: $query = "SELECT * FROM $dbschema.user_login('$session_old'::TEXT, '$session'::TEXT)"; $result = pg_query($login, $query); I have also tried this using ::session_ids in place of ::TEXT when calling the function, however I receive the same error. Help :o(
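
    A plausible cause, given that phpPgAdmin works and PHP does not: the two connections resolve unqualified type names through different search_path settings, so the domain is found in one session and not the other. A sketch of the usual fix, under the assumption that the domain lives in the public schema; only the declarations change, the rest of the function body stays as-is:

        DECLARE
            -- schema-qualify the domain so the function compiles no matter
            -- what search_path the connecting role happens to have
            oldSessionId public.session_ids := $1;
            newSessionId public.session_ids := $2;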

    Read the article

  • Build dynamic LINQ?

    - by d daly
    Hi, I'm using LINQ to query data, but can anyone tell me how to build the query dynamically, if the user only wants to report on, say, 1 of the 3 fields? (see below) Thanks DD var query = from cl in db.tblClaims join cs in db.tblCases on cl.ref_no equals cs.ref_no where cl.claim_status == "Appeal" && cl.appeal_date >= Convert.ToDateTime(txtReferedFromDate.Text) && cl.appeal_date <= Convert.ToDateTime(txtReferedToDate.Text) && cs.referred_from_lho == dlLHO.Text && cs.adviser == dlAdviser.Text select new { Ref = cs.ref_no, ClaimType = cl.claim_type, ClaimStatus = cl.claim_status, AppealDate = cl.appeal_date }; gvReport.DataSource = query;
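
    One common approach is to keep the always-on conditions in the query expression and tack the optional ones on afterwards; deferred execution means nothing runs until the grid binds. A sketch, assuming the date range is always supplied and the two drop-downs are optional:

        var query = from cl in db.tblClaims
                    join cs in db.tblCases on cl.ref_no equals cs.ref_no
                    where cl.claim_status == "Appeal"
                          && cl.appeal_date >= Convert.ToDateTime(txtReferedFromDate.Text)
                          && cl.appeal_date <= Convert.ToDateTime(txtReferedToDate.Text)
                    select new { Claim = cl, Case = cs };

        // append filters only when the user picked a value
        if (!string.IsNullOrEmpty(dlLHO.Text))
            query = query.Where(x => x.Case.referred_from_lho == dlLHO.Text);
        if (!string.IsNullOrEmpty(dlAdviser.Text))
            query = query.Where(x => x.Case.adviser == dlAdviser.Text);

        gvReport.DataSource = query.Select(x => new {
            Ref = x.Case.ref_no,
            ClaimType = x.Claim.claim_type,
            ClaimStatus = x.Claim.claim_status,
            AppealDate = x.Claim.appeal_date
        });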

    Read the article

  • Masspay and MySql

    - by Mike
    Hi, I am testing PayPal's MassPay using their 'MassPay NVP example' and I am having difficulty trying to amend the code so it inputs data from my MySQL database. Basically I have a users table in MySQL which contains email address, status of payment (paid, unpaid) and balance. CREATE TABLE `users` ( `user_id` int(10) unsigned NOT NULL auto_increment, `email` varchar(100) collate latin1_general_ci NOT NULL, `status` enum('unpaid','paid') collate latin1_general_ci NOT NULL default 'unpaid', `balance` int(10) NOT NULL default '0', PRIMARY KEY (`user_id`) ) ENGINE=MyISAM AUTO_INCREMENT=6 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci Data: 1 [email protected] paid 100 2 [email protected] unpaid 11 3 [email protected] unpaid 20 4 [email protected] unpaid 1 5 [email protected] unpaid 20 6 [email protected] unpaid 15 I have then created a query which selects users with an unpaid balance of $10 and above: $conn = db_connect(); $query=$conn->query("SELECT * from users WHERE balance >='10' AND status = ('unpaid')"); What I would like is for each record returned from the query to populate the code below, which I believe is the part I need to amend: for($i = 0; $i < 3; $i++) { $receiverData = array( 'receiverEmail' => "[email protected]", 'amount' => "example_amount",); $receiversArray[$i] = $receiverData; } However I just can't get it to work. I have tried using mysqli_fetch_array and then replaced "[email protected]" with $row['email'] and "example_amount" with row['balance'] in various ways, but it doesn't work. Also I need it to loop through however many rows were retrieved from the query, rather than the fixed < 3 in the for loop above. So the end result I am looking for is for the $nvpStr string to pass something like this: $nvpStr = "&EMAILSUBJECT=test&RECEIVERTYPE=EmailAddress&CURRENCYCODE=USD&[email protected]&L_Amt=11&[email protected]&L_Amt=11&[email protected]&L_Amt=20&[email protected]&L_Amt=20&[email protected]&L_Amt=15"; Thanks
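
    A sketch of the fetch loop, replacing the fixed for loop; since $conn is a mysqli connection, the result object can be iterated until it runs dry, so the receiver count always matches the row count:

        $receiversArray = array();
        $i = 0;
        while ($row = $query->fetch_assoc()) {
            $receiversArray[$i] = array(
                'receiverEmail' => $row['email'],
                'amount'        => $row['balance'],
            );
            $i++;
        }

    One detail worth checking against the MassPay NVP docs: the per-receiver fields are indexed (L_EMAIL0/L_AMT0, L_EMAIL1/L_AMT1, ...), so build $nvpStr from $receiversArray with the index included.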

    Read the article

  • Hibernate HQL with interfaces

    - by Benju
    According to this section of the Hibernate documentation, I should be able to query any Java class in HQL: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/queryhql.html#queryhql-polymorphism Unfortunately, when I run this query... "from Transaction trans where trans.envelopeId=:envelopeId" I get the message "Transaction is not mapped [from Transaction trans where trans.envelopeId=:envelopeId]". Transaction is an interface, I have two entity classes that implement it, and I want one HQL query to return a Collection of type Transaction.
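
    Hibernate resolves unqualified names in HQL through its auto-imports, which are registered for mapped entities; an unmapped interface usually is not registered, so the fully qualified name is needed. A sketch, with a hypothetical package name standing in for the real one:

        List<Transaction> txns = session
            .createQuery("from com.example.model.Transaction t where t.envelopeId = :envelopeId")
            .setParameter("envelopeId", envelopeId)
            .list();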

    Read the article

  • Compiled Queries and "Parameters cannot be sequences"

    - by David B
    I thought that compiled queries would perform the same query translation as DataContext. Yet I'm getting a run-time error when I try to use a query with a .Contains method call. Where have I gone wrong? //private member which holds a compiled query. Func<DataAccess.DataClasses1DataContext, List<int>, List<DataAccess.TestRecord>> compiledFiftyRecordQuery = System.Data.Linq.CompiledQuery.Compile <DataAccess.DataClasses1DataContext, List<int>, List<DataAccess.TestRecord>> ((dc, ids) => dc.TestRecords.Where(tr => ids.Contains(tr.ID)).ToList()); //this method calls the compiled query. public void FiftyRecordCompiledQueryByID() { List<int> IDs = GetRandomInts(50); //System.NotSupportedException //{"Parameters cannot be sequences."} List<DataAccess.TestRecord> results = compiledFiftyRecordQuery (myContext, IDs); }
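
    "Parameters cannot be sequences" is the compiled-query pipeline refusing the List<int> parameter: LINQ to SQL expands a local collection's Contains into a SQL IN(...) list whose length changes per call, and that cannot be baked into a precompiled plan. A sketch of the usual workaround is simply to leave this particular query uncompiled:

        // the ordinary (non-compiled) pipeline re-translates the query on each
        // call, so the variable-length IN(...) produced by Contains is fine here
        public void FiftyRecordQueryByID()
        {
            List<int> IDs = GetRandomInts(50);
            List<DataAccess.TestRecord> results =
                myContext.TestRecords.Where(tr => IDs.Contains(tr.ID)).ToList();
        }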

    Read the article

  • Split function in where clause

    - by abhishek-khandelwal
    Hello friends, I am using the following query in LINQ. In the Product table the following types of data are stored: abc-def bcd=fgh abc-xyz var query=from prod in db.Product join cat in db.category on prod.categoryId=cat.categoryID where prod.productName.split('-')[0]=="abc" but that query produces an error. Please give some suggestions for how to split in the where clause.
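
    LINQ to SQL cannot translate String.Split into SQL, so any query shaped that way will fail at translation time. For a prefix test like this one, StartsWith translates to LIKE and expresses the same intent; a sketch (note the join also needs the equals keyword):

        var query = from prod in db.Product
                    join cat in db.category on prod.categoryId equals cat.categoryID
                    where prod.productName.StartsWith("abc-")   // translated to LIKE 'abc-%'
                    select prod;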

    Read the article

  • NHibernate FetchMode.Lazy

    - by RyanFetz
    I have an object which has a property on it that then has collections which I would like to not load in a couple of situations. 98% of the time I want those collections fetched, but in the one instance I do not. Here is the code I have... Why does it not set the fetch mode on the property's collections? [DataContract(Name = "ThemingJob", Namespace = "")] [Serializable] public class ThemingJob : ServiceJob { [DataMember] public virtual Query Query { get; set; } [DataMember] public string Results { get; set; } } [DataContract(Name = "Query", Namespace = "")] [Serializable] public class Query : LookupEntity<Query>, DAC.US.Search.Models.IQueryEntity { [DataMember] public string QueryResult { get; set; } private IList<Asset> _Assets = new List<Asset>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Asset> Assets { get { return _Assets; } set { _Assets = value; } } private IList<Theme> _Themes = new List<Theme>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Theme> Themes { get { return _Themes; } set { _Themes = value; } } private IList<Affinity> _Affinity = new List<Affinity>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Affinity> Affinity { get { return _Affinity; } set { _Affinity = value; } } private IList<Word> _Words = new List<Word>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Word> Words { get { return _Words; } set { _Words = value; } } } using (global::NHibernate.ISession session = NHibernateApplication.GetCurrentSession()) { global::NHibernate.ICriteria criteria = session.CreateCriteria(typeof(ThemingJob)); global::NHibernate.ICriteria countCriteria = session.CreateCriteria(typeof(ThemingJob)); criteria.AddOrder(global::NHibernate.Criterion.Order.Desc("Id")); var qc = criteria.CreateCriteria("Query"); qc.SetFetchMode("Assets", global::NHibernate.FetchMode.Lazy); qc.SetFetchMode("Themes", global::NHibernate.FetchMode.Lazy); qc.SetFetchMode("Affinity", global::NHibernate.FetchMode.Lazy); qc.SetFetchMode("Words", global::NHibernate.FetchMode.Lazy); pageIndex = Convert.ToInt32(pageIndex) - 1; // convert to 0 based paging index criteria.SetMaxResults(pageSize); criteria.SetFirstResult(pageIndex * pageSize); countCriteria.SetProjection(global::NHibernate.Criterion.Projections.RowCount()); int totalRecords = (int)countCriteria.List()[0]; return criteria.List<ThemingJob>().ToPagedList<ThemingJob>(pageIndex, pageSize, totalRecords); }

    Read the article

  • Building Queries Systematically

    - by Jeremy Smyth
    The SQL language is a bit like a toolkit for data. It consists of lots of little fiddly bits of syntax that, taken together, allow you to build complex edifices and return powerful results. For the uninitiated, the many tools can be quite confusing, and it's sometimes difficult to decide how to go about the process of building non-trivial queries, that is, queries that are more than a simple SELECT a, b FROM c;

    A System for Building Queries

    When you're building queries, you could use a system like the following:

    1. Decide which fields contain the values you want to use in your output, and how you wish to alias those fields:
       • Values you want to see in your output.
       • Values you want to use in calculations. For example, to calculate margin on a product, you could calculate price - cost and give it the alias margin.
       • Values you want to filter with. For example, you might only want to see products that weigh more than 2Kg or that are blue. The weight or colour columns could contain that information.
       • Values you want to order by. For example you might want the most expensive products first, and the least last. You could use the price column in descending order to achieve that.
    2. Assuming the fields you've picked in point 1 are in multiple tables, find the connections between those tables:
       • Look for relationships between tables and identify the columns that implement those relationships. For example, the Orders table could have a CustomerID field referencing the same column in the Customers table.
       • Sometimes the problem doesn't use relationships but rests on a different field; sometimes the query is looking for a coincidence of fact rather than a foreign key constraint. For example you might have sales representatives who live in the same state as a customer; this information is normally not used in relationships, but if your query is for organizing events where sales representatives meet customers, it's useful in that query. In such a case you would record the names of columns at either end of such a connection.
       • Sometimes relationships require a bridge, a junction table that wasn't identified in point 1 above but is needed to connect tables you need; these are used in "many-to-many relationships". In these cases you need to record the columns in each table that connect to similar columns in other tables.
    3. Construct a join or series of joins using the fields and tables identified in point 2 above. This becomes your FROM clause.
    4. Filter using some of the fields in point 1 above. This becomes your WHERE clause.
    5. Construct an ORDER BY clause using values from point 1 above that are relevant to the desired order of the output rows.
    6. Project the result using the remainder of the fields in point 1 above. This becomes your SELECT clause.

    A Worked Example

    Let's say you want to query the world database to find a list of countries (with their capitals) and the change in GNP, using the difference between the GNP and GNPOld columns, and that you only want to see results for countries with a population greater than 100,000,000. Using the system described above, we could do the following:

    1. The Country.Name and City.Name columns contain the name of the country and city respectively. The change in GNP comes from the calculation GNP - GNPOld; both those columns are in the Country table. This calculation is also used to order the output, in descending order. To see only countries with a population greater than 100,000,000, you need the Population field of the Country table. There is also a Population field in the City table, so you'll need to specify the table name to disambiguate. You can also represent a number like 100 million as 100e6 instead of 100000000 to make it easier to read.
    2. Because the fields come from the Country and City tables, you'll need to join them. There are two relationships between these tables: each city is hosted within a country, and the city's CountryCode column identifies that country; also, each country has a capital city, whose ID is contained within the country's Capital column. This latter relationship is the one to use, so the relevant columns and the condition that uses them is represented by the following FROM clause:

       FROM Country JOIN City ON Country.Capital = City.ID

    3. The statement should only return countries with a population greater than 100,000,000. Country.Population is the relevant column, so the WHERE clause becomes:

       WHERE Country.Population > 100e6

    4. To sort the result set in reverse order of difference in GNP, you could use either the calculation, or the position in the output (it's the third column):

       ORDER BY GNP - GNPOld or ORDER BY 3

    5. Finally, project the columns you wish to see by constructing the SELECT clause:

       SELECT Country.Name AS Country, City.Name AS Capital,
              GNP - GNPOld AS `Difference in GNP`

    The whole statement ends up looking like this:

    mysql> SELECT Country.Name AS Country, City.Name AS Capital,
        ->        GNP - GNPOld AS `Difference in GNP`
        -> FROM Country JOIN City ON Country.Capital = City.ID
        -> WHERE Country.Population > 100e6
        -> ORDER BY 3 DESC;
    +--------------------+------------+-------------------+
    | Country            | Capital    | Difference in GNP |
    +--------------------+------------+-------------------+
    | United States      | Washington |         399800.00 |
    | China              | Peking     |          64549.00 |
    | India              | New Delhi  |          16542.00 |
    | Nigeria            | Abuja      |           7084.00 |
    | Pakistan           | Islamabad  |           2740.00 |
    | Bangladesh         | Dhaka      |            886.00 |
    | Brazil             | Brasília   |         -27369.00 |
    | Indonesia          | Jakarta    |        -130020.00 |
    | Russian Federation | Moscow     |        -166381.00 |
    | Japan              | Tokyo      |        -405596.00 |
    +--------------------+------------+-------------------+
    10 rows in set (0.00 sec)

    Queries with Aggregates and GROUP BY

    While this system might work well for many queries, it doesn't cater for situations where you have complex summaries and aggregation. For aggregation, you'd start with choosing which columns to view in the output, but this time you'd construct them as aggregate expressions. For example, you could look at the average population, or the count of distinct regions. You could also perform more complex aggregations, such as the average of GNP per head of population, calculated as AVG(GNP/Population). Having chosen the values to appear in the output, you must choose how to aggregate those values. A useful way to think about this is that every aggregate query is of the form X, Y per Z. The SELECT clause contains the expressions for X and Y, as already described, and Z becomes your GROUP BY clause. Ordinarily you would also include Z in the query so you see how you are grouping, so the output becomes Z, X, Y per Z.
As an example, consider the following, which shows a count of  countries and the average population per continent:  mysql> SELECT Continent, COUNT(Name), AVG(Population)     -> FROM Country     -> GROUP BY Continent; +---------------+-------------+-----------------+ | Continent     | COUNT(Name) | AVG(Population) | +---------------+-------------+-----------------+ | Asia          |          51 |   72647562.7451 | | Europe        |          46 |   15871186.9565 | | North America |          37 |   13053864.8649 | | Africa        |          58 |   13525431.0345 | | Oceania       |          28 |    1085755.3571 | | Antarctica    |           5 |          0.0000 | | South America |          14 |   24698571.4286 | +---------------+-------------+-----------------+ 7 rows in set (0.00 sec) In this case, X is the number of countries, Y is the average population, and Z is the continent. Of course, you could have more fields in the SELECT clause, and  more fields in the GROUP BY clause as you require. You would also normally alias columns to make the output more suited to your requirements. More Complex Queries  Queries can get considerably more interesting than this. You could also add joins and other expressions to your aggregate query, as in the earlier part of this post. You could have more complex conditions in the WHERE clause. Similarly, you could use queries such as these in subqueries of yet more complex super-queries. Each technique becomes another tool in your toolbox, until before you know it you're writing queries across 15 tables that take two pages to write out. But that's for another day...

    Read the article

  • Dynamic Linq Library Guid exceptions

    - by Adan
    I am having a problem with the Dynamic LINQ Library. I get the following error: ParserException was unhandled by user code: "')' or ',' expected". I have a Dictionary and I want to create a query based on this dictionary, so I loop through my dictionary and append "PersonId = (GUID FROM DICTIONARY)" to a StringBuilder. I think the problem is where I append to PersonId; for some reason I can't seem to convert my string guid to a Guid so the dynamic library doesn't crash. I have tried this to convert my string guid to a Guid, but no luck: query.Append("(PersonId = Guid(" + person.Key + ")"); query.Append("(PersonId = " + person.Key + ")"); I am using VS 2010 RTM and RIA Services as well as Entity Framework 4. //This is the loop I use foreach (KeyValuePair<Guid, PersonDetails> person in personsDetails) { if ((person.Value as PersonDetails).IsExchangeChecked) { query.Append("(PersonId = Guid.Parse(" + person.Key + ")"); } } //Domain service call var query = this.ObjectContext.Persons.Where(DynamicExpression.ParseLambda<Person, bool>(persons)); Please help, and if you know of a better way of doing this I am open to suggestions.
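
    The Dynamic LINQ parser has no Guid(...) or Guid.Parse(...) syntax, which is what the "')' or ','" complaint is about. The library does support positional placeholders (@0, @1, ...) that carry typed values, so the Guids never need to round-trip through the expression text. A sketch of the loop rebuilt that way (it assumes at least one person is checked):

        var sb = new StringBuilder();
        var values = new List<object>();
        foreach (KeyValuePair<Guid, PersonDetails> person in personsDetails)
        {
            if (!person.Value.IsExchangeChecked) continue;
            if (sb.Length > 0) sb.Append(" or ");
            sb.Append("PersonId = @").Append(values.Count);  // @0, @1, ... placeholders
            values.Add(person.Key);                          // the Guid value itself, not text
        }
        var predicate = DynamicExpression.ParseLambda<Person, bool>(sb.ToString(), values.ToArray());
        var matches = this.ObjectContext.Persons.Where(predicate);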

    Read the article

  • How to remove a status message added by the seam security module?

    - by Joshua
    I would like to show a different status message when a suspended user tries to log in. If the user is active we return true from the authenticate method; if not, we add a custom StatusMessage mentioning that "User X has been suspended". The underlying Identity authentication also fails and adds a StatusMessage. I tried removing the Seam-generated status message with the following methods, but it doesn't seem to work and shows me 2 different status messages (my custom message and the Seam-generated one). What would be the issue here? StatusMessages statusMessages; statusMessages.clear() statusMessages.clearGlobalMessages() statusMessages.clearKeyedMessages(id) EDIT1: public boolean authenticate() { log.info("Authenticating {0}", identity.getCredentials().getUsername()); String username = identity.getCredentials().getUsername(); String password = identity.getCredentials().getPassword(); // return true if the authentication was // successful, false otherwise try { Query query = entityManager.createNamedQuery("user.by.login.id"); query.setParameter("loginId", username); // only active users can log in query.setParameter("status", "ACTIVE"); currentUser = (User)query.getSingleResult(); } catch (PersistenceException ignore) { // Provide a status message for the locked account statusMessages.clearGlobalMessages(); statusMessages.addFromResourceBundle( "login.account.locked", new Object[] { username }); return false; } IdentityManager identityManager = IdentityManager.instance(); if (!identityManager.authenticate(username, "password")) { return false; } else { log.info("Authenticated user {0} successfully", username); } }

    Read the article

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version.

    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ development team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS with over 80 proxy servers. It is certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant that they had to back out of the upgrade and wait for a couple of hotfixes:

    • KB983504 – Hotfix
    • KB983578 – Patch
    • KB2401992 – Hotfix

    In the time since they got the hotfixes they have run 6 successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data it takes time to copy it around. It takes time to do the upgrade and it takes time to do a backup. Well, last week it was crunch time: with their developers off for Christmas they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available “just in case”. They did not expect any problems, as they already had 6 successful trial upgrades.

    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either. It would not even list the new collection as attachable!

    Figure: We know there is a database there, but it does not show

    This was a dire situation, as 20+ hours to repeat would leave the customer out of time, with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort:

    TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name" (http://msdn.microsoft.com/en-us/library/ff407077.aspx)

    WARNING: Never run this command! Now this command does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated, leaving the data behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the Configuration database.

    Figure: After attaching the TPC it enters a servicing mode

    After reattaching the team project collection we found the message “Re-Attaching”. Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.

    Figure: Everything looks good, it is just offline.

    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?

    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded “tfsconfig recover”, but was not fazed.

    Figure: This message looks suspiciously like the wrong servicing version

    As it turns out, the servicing version was slightly out of sync with the schema:

    KB          Schema   Successful
    KB983504    341      Yes
    KB983578    344      Sort of
    KB2401992   360      Nope

    Figure: KB/Schema table with notation as to its success

    The Schema version above represents the final end-of-run version for that hotfix or patch.

    The only way forward

    The problem was that the version was somewhere between 341 and 344. This is not a nice place to be in, and the engineer gave us the only way forward: the removal of the servicing number from the database so that the re-attach process would apply the latest schema. If this sounds a little like the “tfsconfig recover” command, then you are exactly right.

    Figure: Sneakily changing that 3 to a 1 should do the trick

    Figure: Changing the status and dropping the version should do it

    Now that we have done that we should be able to safely reattach and enable the Team Project Collection.

    Figure: The TPC is now all attached and running

    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice we came to the opinion that the database was, for want of a better term, “hosed”. There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We contacted the customer back and made them aware that in all likelihood the repaired database was more like a “cut and shut” than anything else, and at the first sign of trouble later down the line it was likely to split in two. So with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the “cut and shut” to production and hope for the best?

    Read the article

  • A question about the basics of LINQ to SQL

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like from Customer in DB.Customers where Customer.Age > 30 select Customer LINQ gets all customers from the database ("SELECT * FROM Customers"), moves them to the Customers array and then searches that array using .NET methods. This is very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now, after experiencing how fast LINQ to SQL actually is, I have started to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string SELECT * FROM Customers WHERE Age > 30 and only runs the query when necessary. So my question is: am I right? And when is the query actually run? The reason why I'm asking is not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have 2 tables, one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query: from Book in DB.Books where (from Sale in DB.Sales where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID).Contains(Book.ID) select Book The point is, I have to use the checking part in several queries and I decided to create an array with IDs of all popular books: var popularBooksIDs = from Sale in DB.Sales where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID; BUT when I try to do the query now: from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book it doesn't work! That's why I think that we can't use these kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?
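
    On the first half of the question: yes, the query is a deferred expression tree, and the SQL is generated and sent only when the results are enumerated. A minimal sketch of where the round-trip happens:

        var adults = from c in DB.Customers
                     where c.Age > 30
                     select c;        // nothing sent to the server yet

        var list = adults.ToList();   // the SQL (SELECT ... WHERE Age > 30)
                                      // is generated and executed here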

    Read the article

  • how to count all distinct records in many-to-many relations in django ORM?

    - by marduk-pl
    Hi, I have two models: class Project(models.Model): categories = models.ManyToManyField(Category) class Category(models.Model): name = models.CharField() Now I make a queryset: query = Project.objects.filter(id__in=[1,2,3,4]) and I would like to get a list of all distinct categories in this queryset, with the count of projects referring to each of these categories - exactly, I would like to get results like: category1 - 10 projects category2 - 5 projects That is the opposite of this query: query2 = query.annotate(Count('categories')) which returns me: project1 - 2categories project2 - 7categories How can I do this in the Django ORM?
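
    A sketch of one way to flip the annotation around: start from Category and count through the reverse many-to-many lookup (the default reverse name for Project is the lowercased model name, "project"):

        from django.db.models import Count

        categories = (Category.objects
                              .filter(project__in=[1, 2, 3, 4])
                              .annotate(project_count=Count('project')))
        # each category in the queryset now carries category.project_count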

    Read the article

  • How do I call a bash script function using exec by passing parameters in PHP?

    - by Stan
    I have created a bash script that install magento in a cpanel. but i have a problem regarding the exec function. $function_path = Mage::getBaseDir()."/media/installer/function.sh"; exec("$function_path $db_host $db_name $db_user $db_pass $url $ad_user $ad_pass $ad_email"); This the bash shell script function.sh #!/bin/bash magento_detail $dbhost $dbname $dbuser $dbpass $url $admin_username $admin_password $admin_email function magento_detail() { stty erase '^?' echo "To install Magento, you will need a blank database ready with a user assigned to it." echo -n "Do you have all of your database information" dbinfo = "y" echo $dbinfo if [ "$dbinfo" -eq 'y' ] then echo "Database Host (usually localhost) : $dbhost " echo "Database Name : $dbname " echo "Database User : $dbuser " echo "Database Password : $dbpass " echo "Store Url : $url " echo "Admin Username : $admin_username " echo "Admin Password : $admin_password " echo "Admin Email Address : $admin_email " echo -n "Include Sample Data? (y/n) " echo sample = "y" if [ "$sample" -eq "y" ]; then echo echo "Now installing Magento with sample data..." echo echo "Downloading packages..." echo wget http://www.magentocommerce.com/downloads/assets/1.5.1.0/magento-1.5.1.0.tar.gz wget http://www.magentocommerce.com/downloads/assets/1.2.0/magento-sample-data-1.2.0.tar.gz echo echo "Extracting data..." echo tar -zxvf magento-1.5.1.0.tar.gz tar -zxvf magento-sample-data-1.2.0.tar.gz echo echo "Moving files..." echo mv magento-sample-data-1.2.0/media/* magento/media/ mv magento-sample-data-1.2.0/magento_sample_data_for_1.2.0.sql magento/data.sql mv magento/index.php magento/.htaccess ./$test1 echo echo "Setting permissions..." echo chmod o+w var var/.htaccess app/etc chmod -R o+w media echo echo "Importing sample products..." echo mysql -h $dbhost -u $dbuser -p$dbpass $dbname < data.sql echo echo "Initializing PEAR registry..." echo chmod 550 mage ./mage mage-setup . echo echo "Downloading packages..." echo echo echo "Cleaning up files..." echo rm -rf downloader/pearlib/cache/* downloader/pearlib/download/* rm -rf magento/ magento-sample-data-1.2.0/ rm -rf magento-1.5.1.0.tar.gz magento-sample-data-1.2.0.tar.gz data.sql rm -rf index.php.sample .htaccess.sample php.ini.sample LICENSE.txt STATUS.txt data.sql echo echo "Installing Magento..." echo php -f install.php --license_agreement_accepted "yes" --locale "en_US" --timezone "America/Los_Angeles" --default_currency "USD" --db_host "$dbhost" --db_name "$dbname" --db_user "$dbuser" --db_pass "$dbpass" --url "$url" --use_rewrites "yes" --use_secure "no" --secure_base_url "" --use_secure_admin "no" --admin_email "$admin_email" --admin_username "$admin_username" --admin_password "$admin_password" echo echo "Finished installing Magento" echo exit else echo "Now installing Magento without sample data..." echo echo "Downloading packages..." echo wget http://www.magentocommerce.com/downloads/assets/1.5.1.0/magento-1.5.1.0.tar.gz echo echo "Extracting data..." echo tar -zxvf magento-1.5.1.0.tar.gz echo echo "Moving files..." echo mv magento/* magento/.htaccess . echo echo "Setting permissions..." echo chmod o+w var var/.htaccess app/etc chmod -R o+w media echo echo "Initializing PEAR registry..." echo chmod 550 mage ./mage mage-setup . echo echo "Downloading packages..." echo echo echo "Cleaning up files..." 
echo rm -rf downloader/pearlib/cache/* downloader/pearlib/download/* rm -rf magento/ magento-1.5.1.0.tar.gz rm -rf index.php.sample .htaccess.sample php.ini.sample LICENSE.txt STATUS.txt echo echo "Installing Magento..." echo php -f install.php --license_agreement_accepted "yes" --locale "en_US" --timezone "America/Los_Angeles" --default_currency "USD" --db_host "$dbhost" --db_name "$dbname" --db_user "$dbuser" --db_pass "$dbpass" --url "$url" --use_rewrites "yes" --use_secure "no" --secure_base_url "" --use_secure_admin "no" --admin_email "$admin_email" --admin_username "$admin_username" --admin_password "$admin_password" echo echo "Finished installing Magento else part" exit fi else echo "Please setup a database first. Don't forget to assign a database user!" exit fi }` when i run this exec command,at that time it calls bash script function magento_installer() which contains arguments $db_host $db_name $db_user $db_pass $url $ad_user $ad_pass $ad_email. above arguments i'll pass in exec command to call magento_installer() function of bash script. so, is it right way of calling a bash script function? It directly goes to the last step of if condition and prints "Please setup a database first. Don't forget to assign a database user!". It cant enter it in if condition and directly goes to else condition. so please help me?
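
    Two things in the script stand out, independent of the PHP side. First, bash only knows a function after its definition has been parsed, and the script calls magento_detail on line 2, before the function body exists; second, -eq is an integer comparison, so [ "$dbinfo" -eq 'y' ] fails for strings and drops straight into the else branch. A sketch of the corrected skeleton (the body of the function stays as above):

        #!/bin/bash

        magento_detail() {
            # ... existing body ...
            if [ "$dbinfo" = "y" ]; then   # '=' compares strings; -eq is for integers
                :                          # ... existing branch ...
            fi
        }

        # call only after the definition, forwarding the arguments PHP passed in;
        # inside the function these become the function's own $1..$8
        magento_detail "$1" "$2" "$3" "$4" "$5" "$6" "$7" "$8"

    On the PHP side it is also worth wrapping each argument in escapeshellarg() before building the exec string, so passwords containing shell metacharacters survive the trip.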

    Read the article

  • What causes this retainAll exception?

    - by Joren
    java.lang.UnsupportedOperationException: This operation is not supported on Query Results at org.datanucleus.store.query.AbstractQueryResult.contains(AbstractQueryResult.java:250) at java.util.AbstractCollection.retainAll(AbstractCollection.java:369) at namespace.MyServlet.doGet(MyServlet.java:101) I'm attempting to take one list I retrieved from a datastore query, and keep only the results which are also in a list I retrieved from a list of keys. Both my lists are populated as expected, but I can't seem to use retainAll on either one of them. // List<Data> listOne = new ArrayList(query.execute(theQuery)); // DatastoreService ds = DatastoreServiceFactory.getDatastoreService(); // List<Data> listTwo = new ArrayList(ds.get(keys).values()); // listOne.retainAll(listTwo);
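
    The trace says retainAll ended up delegating contains() to a lazy DataNucleus query-result collection, which does not support that operation; so at the point of the call, one of the two sides is still the raw query result rather than the ArrayList copy. A sketch that materializes both sides before intersecting, mirroring the types in the commented code (the cast reflects query.execute returning Object):

        // copying into ArrayList forces both collections fully into memory,
        // so retainAll only ever calls contains() on a plain list
        List<Data> listOne = new ArrayList<Data>((Collection<Data>) query.execute(theQuery));
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        List<Data> listTwo = new ArrayList<Data>((Collection<Data>) ds.get(keys).values());
        listOne.retainAll(listTwo);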

    Read the article

  • How do you read system jobs in Dynamics CRM?

    - by Dan Crowther
    The CRM SDK says this is possible but the following code fails. Does anyone know why? var request = new RetrieveMultipleRequest(); var query = new QueryExpression(EntityName.asyncoperation.ToString()); query.ColumnSet = new AllColumns(); request.Query = query; var response = _connection.Execute(request); The error is: <error>\n <code>0x80040216</code> <description>An unexpected error occurred.</description> <type>Platform</type> </error> If I change the entity name to account, it works fine.
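
    One frequently reported culprit with asyncoperation is AllColumns: some of its attributes cannot be retrieved, and narrowing the column set lets the same query succeed. A sketch, under the assumption that your SDK version exposes ColumnSet.Attributes (the attribute names below are the standard asyncoperation ones):

        var query = new QueryExpression(EntityName.asyncoperation.ToString());
        var cols = new ColumnSet();
        cols.Attributes = new string[] { "asyncoperationid", "name", "operationtype", "statecode" };
        query.ColumnSet = cols;   // instead of new AllColumns()

        var request = new RetrieveMultipleRequest();
        request.Query = query;
        var response = _connection.Execute(request);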

    Read the article

  • Restoring databases to a set drive and directory

    - by okeofs
    Restoring databases to a set drive and directory

    Introduction

    Often people say that necessity is the mother of invention. In this case I was faced with the dilemma of having to restore several databases, with multiple ‘ndf’ files, and having to restore them with different physical file names, drives and directories on servers other than the servers from which they originated. As most of us would do, I went to Google to see if I could find some code to achieve this task and found some interesting snippets on Pinal Dave’s website. Naturally, I had to take it further than the code snippet, HOWEVER it was a great place to start.

    Creating a temp table to hold database file details

    First off, I created a temp table which would hold the details of the individual data files within the database. Although there are a plethora of fields (within the temp table below), I utilize LogicalName only within this example. The temporary table structure may be seen below:

    create table #tmp ( LogicalName nvarchar(128), PhysicalName nvarchar(260), Type char(1), FileGroupName nvarchar(128), Size numeric(20,0), MaxSize numeric(20,0), Fileid tinyint, CreateLSN numeric(25,0), DropLSN numeric(25,0), UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0), ReadWriteLSN numeric(25,0), BackupSizeInBytes bigint, SourceBlocSize int, FileGroupId int, LogGroupGUID uniqueidentifier, DifferentialBaseLSN numeric(25,0), DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit, TDEThumbPrint varchar(50) )

    We now declare and populate a variable (@path), setting the variable to the path to our SOURCE database backup:

    declare @path varchar(50) set @path = 'P:\DATA\MYDATABASE.bak'

    From this point, we insert the file details of our database into the temp table. Note that we do so by utilizing a restore statement, HOWEVER doing so in ‘filelistonly’ mode:

    insert #tmp EXEC ('restore filelistonly from disk = ''' + @path + '''')

    At this point, I depart from what I gleaned from Pinal Dave. I now instantiate a few more local variables. The use of each variable will be evident within the cursor (which follows):

    Declare @RestoreString as Varchar(max) Declare @NRestoreString as NVarchar(max) Declare @LogicalName as varchar(75) Declare @counter as int Declare @rows as int set @counter = 1 select @rows = COUNT(*) from #tmp -- Count the number of records in the temp table

    Declaring and populating the cursor

    At this point I do realize that many people are cringing about the use of a cursor. Being an Oracle professional as well, I have learnt that there is a time and place for cursors. I would remind the reader that the data that will be read into the cursor is from a local temp table and as such, any locking of the records (within the temp table) is not really an issue.

    DECLARE MY_CURSOR Cursor FOR Select LogicalName From #tmp

    Parsing the logical names from within the cursor

    A small caveat that works in our favour is that the first logical name (of our database) is the logical name of the primary data file (.mdf). Other files, except for the very last logical name, belong to secondary data files. The last logical name is that of our database log file.

    I now open my cursor and populate the variable @RestoreString:

    Open My_Cursor set @RestoreString = 'RESTORE DATABASE [MYDATABASE] FROM DISK = N''P:\DATA\MYDATABASE.bak''' + ' with '

    We now fetch the first record from the temp table:

    Fetch NEXT FROM MY_Cursor INTO @LogicalName

    While there are STILL records left within the cursor, we dynamically build our restore string. Note that we are using concatenation to create ‘one big restore executable string’. Note also that the target physical file name is hardwired, as is the target directory.

    While (@@FETCH_STATUS <> -1) BEGIN IF (@@FETCH_STATUS <> -2) -- As long as there are no rows missing select @RestoreString = case when @counter = 1 then -- This is the mdf file @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.mdf' + '''' + ', ' -- OK, if it passes through here we are dealing with an .ndf file -- Note that Counter must be greater than 1 and less than the number of rows. when @counter > 1 and @counter < @rows then -- These are the ndf file(s) @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ndf' + '''' + ', ' -- OK, if it passes through here we are dealing with the log file When @LogicalName like '%log%' then @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ldf' + '''' end --Increment the counter set @counter = @counter + 1 FETCH NEXT FROM MY_CURSOR INTO @LogicalName END

    At this point we have populated the varchar(max) variable @RestoreString with a concatenation of all the necessary file names. What we now need to do is to run the sp_executesql stored procedure, to effect the restore.

    First, we must place our ‘concatenated string’ into an nvarchar based variable. Obviously this will only work as long as the length of @RestoreString is less than varchar(max) / 2.

    set @NRestoreString = @RestoreString EXEC sp_executesql @NRestoreString

    Upon completion of this step, the database should be restored to the server. I now close and deallocate the cursor, and to be clean, I would also drop my temp table:

    CLOSE MY_CURSOR DEALLOCATE MY_CURSOR GO

    Conclusion

    Restoration of databases on different servers, with different physical names and on different drives, is a fact of life. Through the use of a few variables and a simple cursor, we have an efficient and effective way to accomplish this task.

    Read the article

  • How do I use QXmlQuery properly? (Qt XQuery/XPath)

    - by Steven Jackson
    I'm using the following code to load in an XML file (actually an NZB): QXmlQuery query; query.bindVariable("path", QVariant(path)); query.setQuery("doc($path)/nzb/file/segments/segment/string()"); if(!query.isValid()) throw QString("Invalid query."); QStringList segments; if(!query.evaluateTo(&segments)) throw QString("Unable to evaluate..."); QString string; foreach(string, segments) qDebug() << "String: " << string; With the following input, it works as expected: <?xml version="1.0" encoding="iso-8859-1" ?> <!DOCTYPE nzb PUBLIC "-//newzBin//DTD NZB 1.0//EN" "http://www.newzbin.com/DTD/nzb/nzb-1.0.dtd"> <nzb> <file> <groups> <group>alt.binaries.cd.image</group> </groups> <segments> <segment>[email protected]</segment> </segments> </file> </nzb> However, with the following input no results are returned. This is how the input should be formatted, with attributes: <?xml version="1.0" encoding="iso-8859-1" ?> <!DOCTYPE nzb PUBLIC "-//newzBin//DTD NZB 1.0//EN" "http://www.newzbin.com/DTD/nzb/nzb-1.0.dtd"> <nzb xmlns="http://www.newzbin.com/DTD/2003/nzb"> <file poster="[email protected]" date="1225385180" subject="ubuntu-8.10-desktop-i386 - ubuntu-8.10-desktop-i386.par2 (1/1)"> <groups> <group>alt.binaries.cd.image</group> </groups> <segments> <segment bytes="66196" number="1">[email protected]</segment> <segment bytes="661967" number="1">[email protected]</segment> </segments> </file> </nzb> Please can someone tell me what I'm doing wrong?
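
    The difference between the two documents is the xmlns="http://www.newzbin.com/DTD/2003/nzb" on the second: once the elements live in that namespace, the unprefixed names nzb/file/... in the XQuery no longer match anything. Declaring the default element namespace in the query should line them up again; a sketch:

        QXmlQuery query;
        query.bindVariable("path", QVariant(path));
        query.setQuery("declare default element namespace "
                       "\"http://www.newzbin.com/DTD/2003/nzb\"; "
                       "doc($path)/nzb/file/segments/segment/string()");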

    Read the article
