Search Results

Search found 31891 results on 1276 pages for 'database schema'.

Page 143 of 1276

  • ASMX schema varies when using WCF Service

    - by Lijo
    Hi, I have a client (created using ASMX "Add Web Reference"). The service is WCF. The signature of the methods differs between the client and the service, and I get some unwanted parameters on the method. Note: I have used IsRequired = true for the DataMember.

    Service: [OperationContract] int GetInt();
    Client: proxy.GetInt(out requiredResult, out resultBool);

    Could you please help me make the schema identical for both the WCF client and the non-WCF client? Are there any best practices for that?

        using System.ServiceModel;
        using System.Runtime.Serialization;

        namespace SimpleLibraryService
        {
            [ServiceContract(Namespace = "http://Lijo.Samples")]
            public interface IElementaryService
            {
                [OperationContract]
                int GetInt();

                [OperationContract]
                int SecondTestInt();
            }

            public class NameDecorator : IElementaryService
            {
                [DataMember(IsRequired = true)]
                int resultIntVal = 1;
                int firstVal = 1;

                public int GetInt() { return firstVal; }
                public int SecondTestInt() { return resultIntVal; }
            }
        }

    Binding = "basicHttpBinding"

        using NonWCFClient.WebServiceTEST;

        namespace NonWCFClient
        {
            class Program
            {
                static void Main(string[] args)
                {
                    NonWCFClient.WebServiceTEST.NameDecorator proxy = new NameDecorator();

                    int requiredResult = 0;
                    bool resultBool = false;
                    proxy.GetInt(out requiredResult, out resultBool);
                    Console.WriteLine("GetInt___" + requiredResult.ToString() + "__" + resultBool.ToString());

                    int secondResult = 0;
                    bool secondBool = false;
                    proxy.SecondTestInt(out secondResult, out secondBool);
                    Console.WriteLine("SecondTestInt___" + secondResult.ToString() + "__" + secondBool.ToString());

                    Console.ReadLine();
                }
            }
        }

    Please help. Thanks, Lijo
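    One commonly suggested workaround (my addition, not part of the original post) is to mark the contract with [XmlSerializerFormat], so that WCF describes the operations with the XmlSerializer instead of the DataContractSerializer; proxies generated by "Add Web Reference" then tend to keep the plain int-returning signatures rather than producing extra out/Specified parameters. A minimal sketch, assuming the same contract as above:

        using System.ServiceModel;

        namespace SimpleLibraryService
        {
            // XmlSerializerFormat asks WCF to describe and serialize these operations
            // with the XmlSerializer, which ASMX-generated proxies map more predictably.
            [ServiceContract(Namespace = "http://Lijo.Samples")]
            [XmlSerializerFormat]
            public interface IElementaryService
            {
                [OperationContract]
                int GetInt();

                [OperationContract]
                int SecondTestInt();
            }
        }

    Whether this removes the extra parameters in a given setup is worth verifying by regenerating the web reference after the change.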

    Read the article

  • Identifying Incompatibility Issues When Migrating SQL Server Database to Windows Azure

    In this article, Marcin Policht looks at migrating existing SQL Server databases to Windows Azure, starting with identifying obstacles associated with such migrations.

    Read the article

  • On Demand Webinar: Extreme Database Performance meets its Backup and Recovery Match

    - by Cinzia Mascanzoni
    Oracle’s Sun ZFS Backup Appliance is a tested, validated and supported backup appliance specifically tuned for Oracle engineered system backup and recovery. The Sun ZFS Backup Appliance is easily integrated with Oracle engineered systems and provides an integrated high-performance backup solution that reduces backup windows by up to 7x and recovery time by up to 4x compared to competitor engineered systems backup solutions. Invite partners to register to attend this webcast to learn how the Sun ZFS Backup Appliance can provide superior performance, cost effectiveness, simplified management and reduced risk.

    Read the article

  • Fact table with multiple facts

    - by Jeff Meatball Yang
    I have a dimension (SiteItem) that has two important facts:

        perUserClicks
        perBrowserClicks

    However, within this dimension I have groups of dimension members based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each group has additional facts that are specific to it:

        AboveFoldItems: eyeTime, loadTime
        LeftNavItems: mouseOverTime
        OnTheFlyItems: nothing extra yet, but there may be some in the future

    Is the following fact table schema OK?

        DateKey, SessionKey, SiteItemKey, perUserClicks, perBrowserClicks, eyeTime, loadTime, mouseOverTime

    It seems a little wasteful since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But this seems like it would be a common problem, so there should be a common solution for it, right?
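    For illustration (my sketch; the column types are assumptions, not from the original question), the wide fact table described above would look roughly like this, with the group-specific measures simply left nullable:

        CREATE TABLE FactSiteItemActivity (
            DateKey          INT NOT NULL,   -- FK to the date dimension
            SessionKey       INT NOT NULL,   -- FK to the session dimension
            SiteItemKey      INT NOT NULL,   -- FK to the SiteItem dimension
            PerUserClicks    INT NULL,
            PerBrowserClicks INT NULL,
            EyeTime          INT NULL,       -- only populated for AboveFoldItems
            LoadTime         INT NULL,       -- only populated for AboveFoldItems
            MouseOverTime    INT NULL,       -- only populated for LeftNavItems
            PRIMARY KEY (DateKey, SessionKey, SiteItemKey)
        );

    The usual alternative is to keep the shared measures in one core fact table and move each group's extra measures into a separate fact table that shares the same dimension keys, trading NULL-heavy rows for extra joins.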

    Read the article

  • SIMD Extensions for the Database Storage Engine

    - by jchang
    For the last 15 years, Intel and AMD have been progressively adding special purpose extensions to their processor architectures. The extensions mostly pertain to vector operations based on the Single Instruction, Multiple Data (SIMD) concept. The motivation was that achieving significant performance improvements in the general purpose elements with each successive generation had become extraordinarily difficult. On the other hand, SIMD performance could be significantly improved with special purpose registers...(read more)

    Read the article

  • How to import other schema jars when using the scomp tool

    - by MikeJiang
    There is a huge number of XML schemas for the business. Some of them are common types like Money.xsd, Address.xsd, etc., while others are business specific like Customer.xsd, ShippingOrder.xsd, etc. So I decided to compile these schemas into 2 jars: one is commonbeans.jar, the other is businessbeans.jar. I've separated them into different folders. Building the commonbeans.jar is simple, just run "scomp -out commonbeans.jar ....\common*.xsd"; running "scomp -out businessbeans.jar ....\business*.xsd" is a different story, though: there are errors saying it can't find those common types, and running "scomp -out businessbeans.jar ....\business*.xsd ....\business*.xsd" will blindly duplicate all the common types into the businessbeans.jar. So is there any way to link the commonbeans.jar when compiling those business schemas, maybe something like "scomp -out businessbeans.jar ....\business*.xsd commonbeans.jar"? I hope my poor English has expressed my issue!

    Read the article

  • How many address fields would you use for a UK database?

    - by Draemon
    Address records are probably used in most databases, but I've seen a number of slightly different sets of fields used to store them. The number of fields seems to vary from 3-7, and sometimes all fields are simply labelled address1..addressN, other times given specific meaning (town, city, etc). This is UK specific, though I'm open to comments about the rest of the world too. Here you need the first line of the address (actually just the number) and the post code to identify the address - everything else is mostly an added bonus. I'm currently favouring:

        Address 1
        Address 2
        Address 3
        Town
        County
        Post Code

    We could add Country if we ever needed it (unlikely). What do you think? Is this too little, too much?
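    As a concrete sketch of that favoured layout (the column names, types and lengths are my assumptions, not from the question):

        CREATE TABLE address (
            address_id INT          NOT NULL PRIMARY KEY,
            address1   VARCHAR(100) NOT NULL,  -- house number / first line: enough to identify the property
            address2   VARCHAR(100) NULL,
            address3   VARCHAR(100) NULL,
            town       VARCHAR(60)  NULL,
            county     VARCHAR(60)  NULL,
            postcode   VARCHAR(10)  NOT NULL   -- UK postcodes are at most 8 characters including the space
        );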

    Read the article

  • Database and query to store and retrieve friend list [migrated]

    - by amr Kamboj
    I am developing a module in a website to save and retrieve a friend list. I am using Zend Framework, and for DB handling I am using Doctrine (ORM). There are two models: 1) users, which stores all the users, and 2) my_friends, which stores the friend list (a reference table with an M:M relation between users). The structure of my_friends is the following:

        id   user_id   friend_id   approved
        10   20        25          1
        10   21        25          1
        10   22        30          1
        10   25        30          1

    The Doctrine query to retrieve the friend list is the following:

        $friends = Doctrine_Query::create()->from('my_friends as mf')
            ->leftJoin('mf.users as friend')
            ->where("mf.user_id = 25")
            ->andWhere("mf.approved = 1");

    Suppose I am viewing user no. 25. With this query I only get user no. 30, whereas user no. 25 is also an approved friend of users no. 20 and 21. Please guide me: what should the query be to find all friends, and is there any need to change the DB structure?
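    In plain SQL, the lookup has to match the viewed user on either side of the relationship, roughly like this (a sketch assuming the table above; the Doctrine query would need the equivalent OR condition):

        SELECT CASE WHEN mf.user_id = 25 THEN mf.friend_id
                    ELSE mf.user_id
               END AS friend_id
        FROM my_friends AS mf
        WHERE (mf.user_id = 25 OR mf.friend_id = 25)
          AND mf.approved = 1;

    The alternative structural change is to store each friendship as two rows (one per direction), which keeps the query a simple WHERE user_id = 25.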

    Read the article

  • Why does SQLite not bring back any results from my database

    - by tigermain
    This is my first SQLite based iPhone app and I am trying to get it to read a menu hierarchy from my database. The database appears to be registered fine, as the compiled statement doesn't error (tried putting in valid table name to test), but for some reason sqlite3_step(compiledStmt) never equals SQLITE_ROW, as if to suggest there is no data in there - which there is.

        sqlite3 *database;
        menu = [[NSMutableArray alloc] init];
        if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
            const char *sqlStmt = "SELECT * FROM Menu";
            sqlite3_stmt *compiledStmt;
            if (sqlite3_prepare_v2(database, sqlStmt, -1, &compiledStmt, NULL) == SQLITE_OK) {
                while (sqlite3_step(compiledStmt) == SQLITE_ROW) {
                    NSString *aTitle = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStmt, 1)];
                    MenuItem *menuItem = [[MenuItem alloc] init];
                    menuItem.title = aTitle;
                    [menu addObject:menuItem];
                    [menuItem release];
                }
            } else {
                NSLog(@"There is an error with the SQL Statement");
            }
            sqlite3_finalize(compiledStmt);
        }
        sqlite3_close(database);

    Read the article

  • Is OpenStack suitable as a fault tolerant DB host?

    - by Jit B
    I am trying to design a fault tolerant DB cluster (schema does not matter) that would not require much maintenance. After looking at almost everything from MySQL to MongoDB to HBase, I still find that no DB is easily scalable - Cassandra comes close, but it has its own set of problems. So I was thinking: what if I run something like MySQL or OrientDB on top of a large OpenStack VM? The VM would be fault tolerant by itself, so I don't need to do it at the DB level. Is it viable? Has it been done before? If not, then what are the possible problems with this approach?

    Read the article

  • Database preference for network-based C# Windows application [on hold]

    - by Sinoop Joy
    I'm planning to develop a C# Windows-based application for an academy. The academy will have different instances of the application running on different machines. The database should have shared access; all the application instances can update, delete or insert. I've not done any network-based application before. Can anybody give me a useful link on where to start? Which database would give maximum performance with all the required features I mentioned for this scenario?

    Read the article

  • Most efficient way to update a MySQL Database on a Linux host with that of an ASP.Net Form on Window

    - by NJTechGuy
    My kind webhost (1and1) royally asked me to go elsewhere to do something like this. I have 2 sites. One of them was developed by a .Net programmer. Now I am contracted to implement a PHP site and fetch data from the .Net site. There is an ASP.Net form that a customer fills in, and when they hit submit, the data gets stored in a SQL Server DB. How do I also store the same data in MySQL in parallel? I cannot directly use database connectors with ASP.Net since MySQL connectivity is not supported on 1and1 Windows hosting (biz account, no less!). What I thought of is to publish an RSS feed of entries on the ASP.Net site and routinely scrape that data into MySQL on the Linux host. It is overkill, I know, and not efficient. I thought I would pick the best brains on SOF to get a different, more efficient opinion. Thanks in advance guys...

    Read the article

  • Storing hierarchical template into a database

    - by pduersteler
    If this title is ambiguous, feel free to change it; I don't know how to put this in a one-liner.

    Example: Let's assume you have an HTML template which contains some custom tags, like <text_field />. We now create a page based on a template containing more of those custom tags. When a user wants to edit the page, he sees a text field; he can input things and save it. This looks fairly easy to set up: you have something like a template_positions table which stores the content of those fields.

    Case: I now have a bit of a blockade keeping things as simple as possible. Assume you have the same tag given in the example above, and additionally <layout> and <repeat> tags. Here's an example of how they should be used:

        <repeat>
          <layout name="image-left">
            <image />
            <text_field />
          </layout>
          <layout name="image-right">
            <text_field />
            <image />
          </layout>
        </repeat>

    We now have a block which can be repeated, obviously. This means: when creating/editing a page containing such a template block, I can choose between the layouts image-left and image-right, which then gets inserted as a content element (where the content for <image /> and <text_field /> gets stored). And because this is inside a <repeat>, content elements from the given layouts can be inserted multiple times.

    How do you store this? Simply said, this could be stored with the same setup I've written in the example above; I just need to add a parent_id or something similar to maintain a hierarchy. But I think I am missing something. At least the relation between an inserted content element and the origin/insertion point is missing. And what happens when I update the template file? Do I have to give every custom tag that acts as an editable part of a template an identifier that matches an identifier in the template, to substitute them correctly? Or can you think of a clean solution that might be better?
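    One way to picture the storage (purely a sketch on my part; the table and column names are assumptions, not from the question) is an adjacency-list content table in which each stored element records both its parent and the identifier of the template tag it was created from:

        CREATE TABLE content_elements (
            id           INT          NOT NULL PRIMARY KEY,
            page_id      INT          NOT NULL,            -- page this content belongs to
            parent_id    INT          NULL,                -- NULL for top-level elements, otherwise FK to content_elements.id
            template_tag VARCHAR(100) NOT NULL,            -- identifier of the tag/layout in the template file
            sort_order   INT          NOT NULL DEFAULT 0,  -- position among repeated siblings inside a <repeat>
            value        TEXT         NULL                 -- editor content for <text_field />, image reference for <image />
        );

    Matching template_tag against stable identifiers in the template file is what keeps stored content attached to the right position when the template itself is later edited.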

    Read the article

  • Restoring Sharepoint content database

    - by jude
    Hi, my WSS_Content database had become corrupt, and my PC was infected by a virus. I had no backup of my WSS_Content database. So I copied the corrupt database to a separate disk, formatted, and reinstalled SharePoint with SQL Server 2005 as before (I'm using SharePoint 2007). I used the Sytools Sharepoint Recovery tool that I found on the net, which helped me restore my corrupt WSS_Content database. Now I want to set this content database as "the" content database for my newly installed SharePoint. I tried the steps that I found in this link: http://www.stationcomputing.com/scblogspace/Lists/Posts/Post.aspx?ID=40 I get stuck at step 3. Can anybody help me? I am really in a big mess and would appreciate any help. Thanks, Jude Aloysius

    Read the article

  • What xsd will let an element have itself as a sub element infinitely?

    - by David Basarab
    How can I create an XSD to give me this type of XML structure, which can go on infinitely?

        <?xml version="1.0" encoding="utf-8" ?>
        <SampleXml>
          <Items>
            <Item name="SomeName" type="string">
              This would be the value
            </Item>
            <Item name="SecondName" type="string">
              This is the next string
            </Item>
            <Item name="AnotherName" type="list">
              <Items>
                <Item name="SubName" type="string">
                  A string in a sub list
                </Item>
                <Item name="SubSubName" type="list">
                  <Items>
                    <Item name="HowDoI" type="string">
                      How do I keep this going infinitely?
                    </Item>
                  </Items>
                </Item>
              </Items>
            </Item>
          </Items>
        </SampleXml>

    The only solution I have found has been to just repeat the structure in the XSD as many times as I am willing to copy, like below.

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="SampleXml">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="Items">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element maxOccurs="unbounded" name="Item">
                        <xs:complexType mixed="true">
                          <xs:sequence minOccurs="0">
                            <xs:element name="Items">
                              <xs:complexType>
                                <xs:sequence>
                                  <xs:element maxOccurs="unbounded" name="Item">
                                    <xs:complexType mixed="true">
                                      <xs:sequence minOccurs="0">
                                        <xs:element name="Items">
                                          <xs:complexType>
                                            <xs:sequence>
                                              <xs:element name="Item">
                                                <xs:complexType>
                                                  <xs:simpleContent>
                                                    <xs:extension base="xs:string">
                                                      <xs:attribute name="name" type="xs:string" use="required" />
                                                      <xs:attribute name="type" type="xs:string" use="required" />
                                                    </xs:extension>
                                                  </xs:simpleContent>
                                                </xs:complexType>
                                              </xs:element>
                                            </xs:sequence>
                                          </xs:complexType>
                                        </xs:element>
                                      </xs:sequence>
                                      <xs:attribute name="name" type="xs:string" use="required" />
                                      <xs:attribute name="type" type="xs:string" use="required" />
                                    </xs:complexType>
                                  </xs:element>
                                </xs:sequence>
                              </xs:complexType>
                            </xs:element>
                          </xs:sequence>
                          <xs:attribute name="name" type="xs:string" use="required" />
                          <xs:attribute name="type" type="xs:string" use="required" />
                        </xs:complexType>
                      </xs:element>
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>
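    A sketch of one way to express this (my suggestion, not taken from the post): pull the repeating structure out into a named complex type that refers to itself, so the nesting can recurse to any depth instead of being spelled out level by level.

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="SampleXml">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="Items" type="ItemsType" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>

          <!-- A named type may reference itself, so Item/Items can nest indefinitely. -->
          <xs:complexType name="ItemsType">
            <xs:sequence>
              <xs:element name="Item" maxOccurs="unbounded">
                <xs:complexType mixed="true">
                  <xs:sequence minOccurs="0">
                    <xs:element name="Items" type="ItemsType" />
                  </xs:sequence>
                  <xs:attribute name="name" type="xs:string" use="required" />
                  <xs:attribute name="type" type="xs:string" use="required" />
                </xs:complexType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:schema>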

    Read the article

  • Correct way to give users access to additional schemas in Oracle

    - by Jacob
    I have two users, Bob and Alice, in Oracle, both created by running the following commands as sysdba from sqlplus:

        create user $blah identified by $password;
        grant resource, connect, create view to $blah;

    I want Bob to have complete access to Alice's schema (that is, all tables), but I'm not sure what grant to run, and whether to run it as sysdba or as Alice. Happy to hear about any good pointers to reference material as well -- I don't seem to be able to get a good answer to this from either the Internet or "Oracle Database 10g The Complete Reference", which is sitting on my desk.
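    Oracle (as of 10g) has no single "grant access to a whole schema" statement, so the usual approach (a sketch with assumed object names, run as Alice or as a DBA) is to grant privileges per object, optionally generating the statements from the data dictionary:

        -- Grant Bob rights on one of Alice's tables explicitly:
        GRANT SELECT, INSERT, UPDATE, DELETE ON alice.some_table TO bob;

        -- Generate one GRANT per table in Alice's schema, then run the generated output:
        SELECT 'GRANT SELECT ON alice.' || table_name || ' TO bob;'
        FROM   all_tables
        WHERE  owner = 'ALICE';

    Bob then refers to the objects as alice.table_name (or via synonyms), since the grants do not change his default schema.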

    Read the article

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrences of event type T since date D". We are trying to make two basic decisions:

    1. What to store - storing every event vs. only storing aggregates:
       (Event log style) log every event and count them later, vs.
       (Time-series style) store a single aggregated "count of event E for date D" for every day

    2. Where to store the data:
       In a relational database (particularly MySQL)
       In a non-relational (NoSQL) database
       In flat log files (collected centrally over the network via syslog-ng)

    What is standard practice / where can I read more about comparing the different types of systems?

    Additional details:
    The total event stream is large, potentially hundreds of thousands of entries per day
    But our current need is only to count certain types of events within it
    We don't necessarily need real-time access to the raw data or aggregation results

    IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX way, but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.
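    As an illustration of the time-series option (my sketch; names and types are assumptions, and the upsert syntax is MySQL-specific since MySQL is mentioned above), a daily aggregate can be kept with one row per event type per day:

        CREATE TABLE daily_event_counts (
            event_date  DATE        NOT NULL,
            event_type  VARCHAR(64) NOT NULL,
            event_count BIGINT      NOT NULL DEFAULT 0,
            PRIMARY KEY (event_date, event_type)
        );

        -- Each incoming event (or each batch from the log crawler) bumps its day's counter:
        INSERT INTO daily_event_counts (event_date, event_type, event_count)
        VALUES (CURRENT_DATE, 'signup', 1)
        ON DUPLICATE KEY UPDATE event_count = event_count + 1;

    "Count of event type T since date D" then becomes a SUM over event_count filtered by event_type and event_date, regardless of whether the raw events are also kept in files.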

    Read the article

  • Best database setup for one click games

    - by ewizard
    I am building a one-click game website/mobile app, and I am debating between using MySQL and MongoDB for the backend. The way I have been exploring it is with a NodeJS/Express/Angular/Passport/MongoDB stack - I have also implemented Socket.io. I have gotten to the point where I am sending data from the Flash game to the server (NodeJS). The only data that needs to be sent is basic user information, each player's score at the end of each game, and some x,y positions for each player's game (for anti-cheating). It seems like MySQL would work fine, but as I am already using MongoDB - are there any major drawbacks to continuing to work with MongoDB on this project?

    Read the article

  • Connect to a MySQL database and count the number of rows.

    - by Hugo
    Hi there! I need to connect to a MySQL database and then show the number of rows. This is what I've got so far:

        <?php
        include "connect.php";
        db_connect();
        $result = mysql_query("SELECT * FROM hacker");
        $num_rows = mysql_num_rows($result);
        echo $num_rows;
        ?>

    When I use that code I end up with this error:

        Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in C:\Documents and Settings\username\Desktop\xammp\htdocs\news2\results.php on line 10

    Thanks in advance :D
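    That warning means mysql_query() returned false instead of a result resource, so the query itself failed (wrong table name, no database selected, or a broken connection). A sketch of a version that reports the actual failure and lets MySQL do the counting (my suggestion, assuming the same connect.php helper as above):

        <?php
        include "connect.php";
        db_connect();

        // COUNT(*) counts on the server instead of fetching every row into PHP.
        $result = mysql_query("SELECT COUNT(*) AS cnt FROM hacker");
        if (!$result) {
            // mysql_error() explains why the query failed (missing table, bad connection, ...).
            die("Query failed: " . mysql_error());
        }

        $row = mysql_fetch_assoc($result);
        echo $row['cnt'];
        ?>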

    Read the article

  • Php efficiency question --> Database call vs. File Write vs. Calling C++ executable

    - by JP19
    Hi, what I wish to achieve is to log all information about each and every visit to every page of my website (like IP address, browser, referring page, etc). Now this is easy to do. What I am interested in is doing this in a way that causes minimum overhead (runtime) in the PHP scripts. What is the best approach for this, efficiency-wise?

    1) Log all information to a database table
    2) Write to a file (from PHP directly)
    3) Call a C++ executable that will write this info to a file in parallel [so the script can continue execution without waiting for the file write to occur ...... is this even possible]

    I may be trying to optimize unnecessarily/prematurely, but still - any thoughts / ideas on this would be appreciated. (I think efficiency of file write/logging can really be a concern if I have, say, 100 visits per minute...) Thanks & Regards, JP
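    For reference, option 1 usually amounts to a single INSERT per request into a narrow table; a sketch (the table layout and column sizes are my assumptions, written for MySQL):

        CREATE TABLE page_visits (
            visit_id       BIGINT       NOT NULL AUTO_INCREMENT PRIMARY KEY,
            visited_at     DATETIME     NOT NULL,
            ip_address     VARCHAR(45)  NOT NULL,   -- 45 characters also covers IPv6 addresses
            user_agent     VARCHAR(255) NULL,
            referring_page VARCHAR(255) NULL,
            page_url       VARCHAR(255) NOT NULL
        );

    At around 100 visits per minute, one small insert like this per request is unlikely to be the bottleneck, which is worth measuring before reaching for option 3.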

    Read the article

  • Database table design vs. ease of use.

    - by Gastoni
    I have a table with 3 fields: color, fruit, date. I can pick 1 fruit and 1 color, but I can do this only once each day. Examples:

        red, apple, monday
        red, mango, monday
        blue, apple, monday
        blue, mango, monday
        red, apple, tuesday

    The two ways in which I could build the table are:

    1. Have color, fruit and date be a composite primary key (PK). This makes it easy to insert data into the table because all the validation needed is done by the database.

        PK color
        PK fruit
        PK date

    2. Have an id column set as the PK and then all the other fields. Many say that's the way it should be, because composite PKs are evil; for example, CakePHP does not support them.

        PK id
        color
        fruit
        date

    Both have advantages. Which would be the 'better' approach?
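    A common middle ground (my sketch, not part of the question) is option 2 plus a unique constraint, so a framework like CakePHP gets its single-column id while the database still enforces the once-per-day rule that the composite PK would have given:

        CREATE TABLE picks (
            id        INT         NOT NULL AUTO_INCREMENT PRIMARY KEY,
            color     VARCHAR(20) NOT NULL,
            fruit     VARCHAR(20) NOT NULL,
            pick_date DATE        NOT NULL,
            UNIQUE (color, fruit, pick_date)   -- the same validation the composite PK would provide
        );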

    Read the article
