Search Results

Search found 10242 results on 410 pages for 'stored proc'.

Page 349/410

  • Why do Scala maps have poor performance relative to Java?

    - by Mike Hanafey
    I am working on a Scala app that consumes large amounts of CPU time, so performance matters. The prototype of the system was written in Python, and performance was unacceptable. The application does a lot with inserting and manipulating data in maps. Rex Kerr's Thyme was used to look at the performance of updating and retrieving data from maps. Basically "n" random Ints were stored in maps, and retrieved from the maps, with the time relative to java.util.HashMap used as a reference. The full results for a range of "n" are here. Sample (n=100,000) performance relative to java, smaller is worse:

                      Update    Read
        Mutable       16.06%    76.51%
        Immutable     31.30%    20.68%

    I do not understand why the scala immutable map beats the scala mutable map in update performance. Using the sizeHint on the mutable map does not help (it appears to be ignored in the tested implementation, 2.10.3). Even more surprisingly the immutable read performance is worse than the mutable read performance, more significantly so with larger maps. The update performance of the scala mutable map is surprisingly bad, relative to both scala immutable and plain Java. What is the explanation?

    Read the article

  • Change Address/Port of WSDL EndPointAddress at runtime?

    - by Pretzel
    So I currently have 3 WSDLs added as Service References in my solution. They look like this in my app.config file (I removed the "bindings" section, because it's uninteresting):

        <system.serviceModel>
          <client>
            <endpoint address="http://localhost:8080/query-service/jse"
                      binding="basicHttpBinding" bindingConfiguration="QueryBinding"
                      contract="QueryService.Query" name="QueryPort" />
            <endpoint address="http://localhost:8080/platetype-service/jse"
                      binding="basicHttpBinding" bindingConfiguration="PlateTypeBinding"
                      contract="PlateTypeService.PlateType" name="PlateTypePort" />
            <endpoint address="http://localhost:8080/dataimport-service/jse"
                      binding="basicHttpBinding" bindingConfiguration="DataImportBinding"
                      contract="DataImportService.DataImport" name="DataImportPort" />
          </client>
        </system.serviceModel>

    When I utilize a WSDL, it looks something like this:

        using (DataService.DataClient dClient = new DataService.DataClient())
        {
            DataService.importTask impt = new DataService.importTask();
            impt.String_1 = "someData";
            DataService.importResponse imptr = dClient.importTask(impt);
        }

    In the "using" statement, when instantiating the DataClient object, I have 5 constructors available to me. In this scenario, I use the default constructor, new DataService.DataClient(), which uses the built-in endpoint address string, which is fine and good. But I want the user of the application to have the option to change this value. 1) What's the best/easiest way of programmatically obtaining this string? 2) Then, once I've allowed the user to edit and test the value, where should I store it? I'd prefer having it stored in a place (like app.config or equivalent) so that there is no need to check whether the value exists or not and whether I should be using an alternate constructor. (Looking to keep my code tight, ya know?) Any ideas? Suggestions?

    Read the article

  • How to count the Chinese word in a file using regex in perl?

    - by Ivan
    I tried the following Perl code to count the Chinese words in a file. It seems to run, but it does not give the right result. Any help is greatly appreciated. The error message is:

        Use of uninitialized value $valid in concatenation (.) or string at word_counting.pl line 21, <FILE> line 21.
        Total things = 125, valid words =

    which suggests to me that the problem is the file format. The "total things" count is 125, which is just the number of strings (125 lines). The strangest part is that my console displays all the individual Chinese words correctly without any problem. The utf8 pragma is enabled.

        #!/usr/bin/perl -w
        use strict;
        use utf8;
        use Encode qw(encode);
        use Encode::HanExtra;

        my $input_file = "sample_file.txt";
        my ($total, $valid);
        my %count;

        open (FILE, "< $input_file") or die "Can't open $input_file: $!";
        while (<FILE>) {
            foreach (split) {          # break $_ into words, assign each to $_ in turn
                $total++;
                next if /\W|^\d+/;     # strange words skip the remainder of the loop
                $valid++;
                $count{$_}++;          # count each separate word stored in a hash
                ## next comes here ##
            }
        }

        print "Total things = $total, valid words = $valid\n";

        foreach my $word (sort keys %count) {
            print "$word \t was seen \t $count{$word} \t times.\n";
        }

        ##---Data---- sample_file.txt
        [Chinese sample text; it survives in this archive only as runs of question marks]

    Read the article

  • mysql: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on Stack Overflow regarding creating tables on the fly, where this conversation ensued:

        "This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for?" - deceze

        "@deceze: very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary number of columns with an arbitrary number of rows. They can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas." - Hailwood

        "There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables!" - deceze

    Question: So my question is, what would you say is the best way to store this data? Are you in agreement with deceze about not creating dynamic tables?
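    To make the "attribute-value schema" suggestion concrete, here is a minimal JDBC sketch assuming a hypothetical two-table layout: csv_upload holds the one guaranteed mobile column, csv_value holds every other column as name/value pairs. All table, column and class names are illustrative, not from the original question:

        import java.sql.*;
        import java.util.Map;

        public class CsvEavStore {
            // Store one parsed CSV row: the mandatory mobile number plus arbitrary named columns.
            public static void storeRow(Connection con, String mobile, Map<String, String> columns)
                    throws SQLException {
                try (PreparedStatement insRow = con.prepareStatement(
                        "INSERT INTO csv_upload (mobile) VALUES (?)", Statement.RETURN_GENERATED_KEYS)) {
                    insRow.setString(1, mobile);
                    insRow.executeUpdate();

                    long rowId;
                    try (ResultSet keys = insRow.getGeneratedKeys()) {
                        keys.next();
                        rowId = keys.getLong(1);
                    }

                    try (PreparedStatement insVal = con.prepareStatement(
                            "INSERT INTO csv_value (upload_id, column_name, value) VALUES (?, ?, ?)")) {
                        for (Map.Entry<String, String> e : columns.entrySet()) {
                            insVal.setLong(1, rowId);
                            insVal.setString(2, e.getKey());
                            insVal.setString(3, e.getValue());
                            insVal.addBatch();
                        }
                        insVal.executeBatch();
                    }
                }
            }
        }

    With an index on (upload_id, column_name) the fixed mobile column stays directly queryable while the variable columns remain schema-free, which is the trade-off deceze is pointing at.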

    Read the article

  • PHP and MySQL validating problem

    - by IPADvsSLATE
    I'm trying to check whether a color has already been entered into the database. If it has, the color should not be entered and stored again, and the following error message should be displayed:

        <p>This color has already been entered!</p>

    But for some reason I can't get this to work. Can someone please help me? The color names are entered into $_POST['color'], which is an array entered by the user. Here is the HTML code that collects the colors:

        <input type="text" name="color[]" />
        <input type="text" name="color[]" />
        <input type="text" name="color[]" />
        <input type="text" name="color[]" />
        <input type="text" name="color[]" />
        <input type="text" name="color[]" />
        <input type="text" name="color[]" />
        <input type="text" name="color[]" />
        <input type="text" name="color[]" />

    Here is the PHP & MySQL code:

        for($i=0; $i < count($_POST['color']); $i++) {
            $color = "'" . $_POST['color'][$i] . "'";
        }

        $mysqli = mysqli_connect("localhost", "root", "", "sitename");
        $dbc = mysqli_query($mysqli,"SELECT * FROM colors WHERE color = '$color' AND user = '$user_id' ");

        if(mysqli_num_rows($dbc) == TRUE) {
            echo '<p>This color has already been entered!</p>';
        } else if(mysqli_num_rows($dbc) == 0) {
            // enter the color into the database
        }

    Read the article

  • Providing custom database functionality to custom asp.net membership provider

    - by IrfanRaza
    Hello friends, I am creating a custom membership provider for my ASP.NET application. I have also created a separate class, "DBConnect", that provides database functionality such as executing SQL statements, executing SPs, executing SPs or queries and returning a SqlDataReader, and so on. I create an instance of the DBConnect class within Session_Start of Global.asax and store it in a session. Later, using a static class, I provide the database functionality throughout the application using that same single session. In short, I am providing a single point for all database operations from any ASP.NET page. I know that I can write my own code to connect/disconnect the database and execute SPs within the methods I need to override. Please look at the code below:

        public class SGI_MembershipProvider : MembershipProvider
        {
            ......
            public override bool ChangePassword(string username, string oldPassword, string newPassword)
            {
                if (!ValidateUser(username, oldPassword))
                    return false;

                ValidatePasswordEventArgs args = new ValidatePasswordEventArgs(username, newPassword, true);
                OnValidatingPassword(args);

                if (args.Cancel)
                {
                    if (args.FailureInformation != null)
                    {
                        throw args.FailureInformation;
                    }
                    else
                    {
                        throw new Exception("Change password canceled due to new password validation failure.");
                    }
                }
                .....
                //Database connectivity and code execution to change password.
            }
            ....
        }

    MY PROBLEM - Now what I need is to execute the database part within all these overridden methods through the same database point described at the top. That is, I have to pass the instance of DBConnect existing in the session to this class, so that I can access its methods. Could anyone provide a solution for this? There might be some better techniques I am not aware of; the approach I am using might be wrong. Your suggestions are always welcome. Thanks for sharing your valuable time.

    Read the article

  • confusion about using types instead of gtts in oracle

    - by Omnipresent
    I am trying to convert queries like the one below to types so that I won't have to use a GTT:

        insert into my_gtt_table_1
          (house, lname, fname, MI, fullname, dob)
          (select house, lname, fname, MI, fullname, dob
             from (select 'REG' house,
                          mbr_last_name lname,
                          mbr_first_name fname,
                          mbr_mi MI,
                          mbr_first_name || mbr_mi || mbr_last_name fullname,
                          mbr_dob dob
                     from table_1 a, table_b
                    where a.head = b.head
                      and mbr_number = '01'
                      and mbr_last_name = v_last_name) c

    The above is just a sample; the real queries are bigger and more complex than this, and they live inside a stored procedure. So, to avoid the GTT (my_gtt_table_1), I did the following:

        create or replace type lname_row as object
        (
          house    varchar2(30)
          lname    varchar2(30),
          fname    varchar2(30),
          MI       char(1),
          fullname VARCHAR2(63),
          dob      DATE
        )

        create or replace type lname_exact as table of lname_row

    Now in the SP:

        type lname_exact is table of <what_table_should_i_put_here>%rowtype;
        tab_a_recs lname_exact;

    In the above I am not sure what table to put, as my query has nested subqueries. The query in the SP (I am trying this as a sample to see if it works):

        select lname_row('', '', '', '', '', '', sysdate)
          bulk collect into tab_a_recs
          from table_1;

    I am getting errors like:

        ORA-00913: too many values

    I am really confused and stuck with this :(

    Read the article

  • What is happening in Crockford's object creation technique?

    - by Chris Noe
    There are only 3 lines of code, and yet I'm having trouble fully grasping this:

        Object.create = function (o) {
            function F() {}
            F.prototype = o;
            return new F();
        };

        newObject = Object.create(oldObject);

    (from Prototypal Inheritance)

    1) Object.create() starts out by creating an empty function called F. I'm thinking that a function is a kind of object. Where is this F object being stored? Globally I guess.

    2) Next our oldObject, passed in as o, becomes the prototype of function F. Function (i.e., object) F now "inherits" from our oldObject, in the sense that name resolution will route through it. Good, but I'm curious what the default prototype is for an object, Object? Is that also true for a function-object?

    3) Finally, F is instantiated and returned, becoming our newObject. Is the "new" operation strictly necessary here? Doesn't F already provide what we need, or is there a critical difference between function-objects and non-function-objects? Clearly it won't be possible to have a constructor function using this technique.

    What happens the next time Object.create() is called? Is global function F overwritten? Surely it is not reused, because that would alter previously configured objects. And what happens if multiple threads call Object.create()? Is there any sort of synchronization to prevent race conditions on F?

    Read the article

  • Actually XSLT Lookup (Store variables during loop and use it in another template)

    - by krisvandenbergh
    Is there a way to store a variable/param during a for-each loop in a sort of array, and use it in another template, namely <xsl:template match="Foundation.Core.Classifier.feature">? All the classname values that appear during the for-each should be stored. How would you implement that in XSLT? Here's my current code:

        <xsl:for-each select="Foundation.Core.Class">
          <xsl:for-each select="Foundation.Core.ModelElement.name">
            <xsl:param name="classname">
              <xsl:value-of select="Foundation.Core.ModelElement.name"/>
            </xsl:param>
          </xsl:for-each>
          <xsl:apply-templates select="Foundation.Core.Classifier.feature" />
        </xsl:for-each>

    Here's the template in which the classname parameters should be used:

        <xsl:template match="Foundation.Core.Classifier.feature">
          <xsl:for-each select="Foundation.Core.Attribute">
            <owl:DatatypeProperty rdf:ID="{Foundation.Core.ModelElement.name}">
              <rdfs:domain rdf:resource="$classname" />
            </owl:DatatypeProperty>
          </xsl:for-each>
        </xsl:template>

    The input file can be found at http://krisvandenbergh.be/uml_pricing.xml

    Read the article

  • Reorganizing development environment for single developer/small shop

    - by Matthew
    I have been developing for my company for approximately three years. We serve up a web portal using Microsoft .NET and MS SQL Server on DotNetNuke. I am going to leave my job full time at the end of April. I am leaving on good terms, and I really care about this company and the state of the web project. Because I haven't worked in a team environment in a long time, I have probably lost touch with what 'real' setups look like. When I leave, I predict the company will either find another developer to take over, or at least have developers work on a contractual basis. Because I have not worked with other developers, I am very concerned with leaving the company (and the developer they hire) with a jumbled mess. I'd like to believe I am a good developer and everything makes sense, but I have no way to tell. My question is, how do I set up the development environment so the company and the next developer will have little trouble getting started? What would you as a developer like to have in place before working on a project you've never worked on? Here's some relevant information:

    There is a development server onsite and a production server offsite in a data center.
    There is a server where backups and source code (Sourcegear Vault) are stored.
    There is no formal documentation, but there are comments in the code.
    The company budget is tight, so free suggestions will help the best.
    I will be around after the end of April on a consulting basis so I can ask simple questions, but I will not be available full time to train someone.

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution that is designed to store membership details, as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far:

        PAN = Primary Account Number == the long number on the credit card

        Server A is a remote server. It stores all membership details (names, addresses, etc.) and provides an individual Key A for each PAN stored.

        Server B is a local server. It actually holds the encrypted PANs, as well as Key B, and does the decryption.

    To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though; however, I don't think that has to be encrypted. I haven't really thought about any implementation (i.e. coding) specifics yet regarding the above; however, the solution is using Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred in this way). The reason why I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server... Can anyone see anything wrong with the above design? It doesn't matter if the above has to be changed. Thanks, jtnire
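    As a rough illustration of the split-key idea (not PCI guidance), here is a minimal Java sketch of the Server B side, assuming Key A and Key B are each short secrets that get hashed together into one AES-GCM key before decrypting the stored ciphertext. The key-combination scheme, IV layout and all names are assumptions for illustration only:

        import javax.crypto.Cipher;
        import javax.crypto.spec.GCMParameterSpec;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.MessageDigest;
        import java.util.Arrays;

        public class PanVault {
            // Server B: combine the client-supplied Key A with the locally held Key B,
            // then decrypt the stored record (12-byte IV prepended to the ciphertext) with AES-GCM.
            public static byte[] decryptPan(byte[] keyA, byte[] keyB, byte[] ivAndCiphertext) throws Exception {
                MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
                sha256.update(keyA);
                sha256.update(keyB);
                byte[] combined = Arrays.copyOf(sha256.digest(), 16); // 128-bit AES key

                byte[] iv = Arrays.copyOfRange(ivAndCiphertext, 0, 12);
                byte[] ct = Arrays.copyOfRange(ivAndCiphertext, 12, ivAndCiphertext.length);

                Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
                cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(combined, "AES"),
                            new GCMParameterSpec(128, iv));
                return cipher.doFinal(ct); // the PAN bytes
            }
        }

    The point of the layout is that neither server alone can recover a PAN: Server A never sees Key B or the ciphertext, and Server B cannot decrypt without the per-record Key A handed over by an authenticated client.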

    Read the article

  • For a set of sql-queries, how do you determine which result-set contains a certain row?

    - by ManBugra
    I have a set of SQL queries:

        List<String> queries = ...
        queries[0] = "select id from person where ...";
        ...
        queries[8756] = "select id from person where ...";

    Each query selects rows from the same table, 'person'. The only difference is the where-clause. Table 'person' looks like this:

        id | name | ... many other columns

    How can I determine which queries will contain a certain person in their subset? For example:

        List<Integer> matchingQueries = magicMethod(queries, [23,45]);

    The list obtained by 'magicMethod' filters all SQL queries present in the list 'queries' (defined above) and returns only those that contain either the person with id 23 OR a person with id 45. Why I need it: I am dealing with an application that contains products and categories, where the categories are SQL queries that define which products belong to them (the queries are stored in a table also). Now I have a requirement where an admin has to see all categories an item belongs to immediately after the item is created. Btw, over 8,000 categories are defined so far, with more to come. Language and DB: Java and PostgreSQL. Thanks,
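    One way to approximate magicMethod, assuming every stored query really is of the form "select id from person where ...": wrap each query so it becomes a cheap membership probe against the given ids and keep the indexes that return a row. A hedged JDBC sketch; the method name and table come from the question, everything else is illustrative:

        import java.sql.*;
        import java.util.ArrayList;
        import java.util.List;

        public class CategoryMatcher {
            // Returns the indexes of the queries whose result set contains at least one of the given ids.
            public static List<Integer> magicMethod(Connection con, List<String> queries, int... ids)
                    throws SQLException {
                // Build "(23, 45)" from the ids; they are integers, so no quoting issues here.
                StringBuilder in = new StringBuilder("(");
                for (int i = 0; i < ids.length; i++) {
                    in.append(ids[i]).append(i < ids.length - 1 ? ", " : ")");
                }

                List<Integer> matching = new ArrayList<>();
                try (Statement st = con.createStatement()) {
                    for (int i = 0; i < queries.size(); i++) {
                        // LIMIT 1 keeps each probe cheap: we only care whether any row matches.
                        String probe = "select 1 from (" + queries.get(i) + ") q "
                                     + "where q.id in " + in + " limit 1";
                        try (ResultSet rs = st.executeQuery(probe)) {
                            if (rs.next()) {
                                matching.add(i);
                            }
                        }
                    }
                }
                return matching;
            }
        }

    With 8,000+ categories this still means 8,000 probes per new item, so caching the results, or eventually representing categories as data rather than SQL text, may be worth a separate question.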

    Read the article

  • How can I get type information at runtime from a DMP file in a Windbg extension?

    - by pj4533
    This is related to my previous question, regarding pulling objects from a dmp file. As I mentioned in the previous question, I can successfully pull objects out of the dmp file by creating wrapper 'remote' objects. I have implemented several of these so far, and it seems to be working well. However, I have run into a snag. In one case, a pointer is stored in a class, say of type 'SomeBaseClass', but that object is actually of the type 'SomeDerivedClass', which derives from 'SomeBaseClass'. For example, it would be something like this:

        MyApplication!SomeObject
           +0x000 field1           : Ptr32 SomeBaseClass
           +0x004 field2           : Ptr32 SomeOtherClass
           +0x008 field3           : Ptr32 SomeOtherClass

    I need some way to find out what the ACTUAL type of 'field1' is. To be more specific, using example addresses:

        MyApplication!SomeObject
           +0x000 field1           : 0cae2e24 SomeBaseClass
           +0x004 field2           : 0x262c8d3c SomeOtherClass
           +0x008 field3           : 0x262c8d3c SomeOtherClass

        0:000> dt SomeBaseClass 0cae2e24
        MyApplication!SomeBaseClass
           +0x000 __VFN_table      : 0x02de89e4
           +0x038 basefield1       : (null)
           +0x03c basefield2       : 3

        0:000> dt SomeDerivedClass 0cae2e24
        MyApplication!SomeDerivedClass
           +0x000 __VFN_table      : 0x02de89e4
           +0x038 basefield1       : (null)
           +0x03c basefield2       : 3
           +0x040 derivedfield1    : 357
           +0x044 derivedfield2    : timecode_t

    When I am in WinDbg, I can do this:

        dt 0x02de89e4

    and it will show the type:

        0:000> dt 0x02de89e4
        SomeDerivedClass::`vftable'
        Symbol not found.

    But how do I get that inside an extension? Can I use SearchMemory() to look for 'SomeDerivedClass::`vftable''? If you follow my other question, I need this type information so I know what type of wrapper remote classes to create. I figure it might end up being some sort of case statement, where I have to match a string to a type? I am ok with that, but I still don't know where I can get the string that represents the type of the object in question (i.e. SomeObject->field1 in the above example).

    Read the article

  • Do database engines other than SQL Server behave this way?

    - by Yishai
    I have a stored procedure that goes something like this (pseudo code):

        storedprocedure param1, param2, param3, param4
        begin
            if (param4 = 'Y')
            begin
                select * from SOME_VIEW order by somecolumn
            end
            else if (param1 is null)
            begin
                select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                  and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
                order by somecolumn
            end
            else
                select somethingcompletelydifferent
        end

    All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this:

        storedprocedure param1, param2, param3, param4
        begin
            if (param4 = 'Y')
            begin
                set param2 = null
                set param3 = null
            end
            if (param1 is null)
            begin
                select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                  and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
                order by somecolumn
            end
            else
                select somethingcompletelydifferent

    and it runs again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question: is this particular "feature" specific to SQL Server, or is it a common feature among RDBMSs in general, that:

        Queries that ran fine for two years just stop working as the data grows.
        The "new" execution plan destroys the ability of the database server to execute the query, even though a logically equivalent alternative runs just fine.

    This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know if others experience this kind of reality with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious if others with large complex databases have similar experience in other products.

    Read the article

  • Java w/ SQL Server Express 2008 - Index out of range exception

    - by BS_C3
    Hi! I created a stored procedure in a SQL Express 2008 database and I'm getting the following error when calling the procedure from a Java method: Index 36 is out of range.

        com.microsoft.sqlserver.jdbc.SQLServerException: Index 36 is out of range.
            at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
            at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setterGetParam(SQLServerPreparedStatement.java:698)
            at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.setValue(SQLServerPreparedStatement.java:707)
            at com.microsoft.sqlserver.jdbc.SQLServerCallableStatement.setString(SQLServerCallableStatement.java:1504)
            at fr.alti.ccm.middleware.Reporting.initReporting(Reporting.java:227)
            at fr.alti.ccm.middleware.Reporting.main(Reporting.java:396)

    I cannot figure out where it is coming from... _< Any help would be appreciated. Regards, BS_C3. Here's some source code:

        public ArrayList<ReportingTableMapping> initReporting(
                String division, String shop, String startDate, String endDate) {
            ArrayList<ReportingTableMapping> rTable = new ArrayList<ReportingTableMapping>();
            ManagerDB db = new ManagerDB();
            CallableStatement callStmt = null;
            ResultSet rs = null;

            try {
                callStmt = db.getConnexion().prepareCall("{call getInfoReporting(?,...,?)}");
                callStmt.setString("CODE_DIVISION", division);
                .
                .
                .
                callStmt.setString("cancelled", " ");

                rs = callStmt.executeQuery();
                while (rs.next()) {
                    ReportingTableMapping rtm = new ReportingTableMapping(
                            rs.getString("werks"),
                            ...
                    );
                    rTable.add(rtm);
                }

                rs.close();
                callStmt.close();
            } catch (Exception e) {
                System.out.println(e.getMessage());
                e.printStackTrace();
            } finally {
                if (rs != null) try { rs.close(); } catch (Exception e) { }
                if (callStmt != null) try { callStmt.close(); } catch (Exception e) { }
                if (db.getConnexion() != null) try { db.getConnexion().close(); } catch (Exception e) { }
            }
            return rTable;
        }
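    The exception is raised on the JDBC side while binding parameters, which usually means more values are being bound than the {call ...} escape or the procedure declares. One sanity check is to list the procedure's declared parameters through standard JDBC metadata and compare them with the setString(...) calls and the number of '?' placeholders. A hedged sketch; the procedure name is the question's, everything else is illustrative:

        import java.sql.*;

        public class ProcParamCheck {
            // Print the declared parameters of a stored procedure so they can be compared
            // with what the calling code actually binds.
            public static void listParameters(Connection con, String procName) throws SQLException {
                DatabaseMetaData meta = con.getMetaData();
                try (ResultSet rs = meta.getProcedureColumns(null, null, procName, "%")) {
                    int count = 0;
                    while (rs.next()) {
                        count++;
                        System.out.printf("%2d  %-30s type=%s%n",
                                rs.getInt("ORDINAL_POSITION"),
                                rs.getString("COLUMN_NAME"),
                                rs.getString("TYPE_NAME"));
                    }
                    System.out.println("Declared parameters: " + count);
                }
            }
        }

    If, for example, getInfoReporting declares 35 parameters and the code binds a 36th name, it would surface as exactly this "Index 36 is out of range" error.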

    Read the article

  • Using `<List>` when dealing with pointers in C#.

    - by Gorchestopher H
    How can I add an item to a list if that item is essentially a pointer, and avoid changing every item in my list to the newest instance of that item? Here's what I mean: I am doing image processing, and there is a chance that I will need to deal with images that come in faster than I can process them (for a short period of time). After this "burst" of images I will rely on the fact that I can process faster than the average image rate, and will "catch up" eventually. So, what I want to do is put my images into a <List> when I acquire them; then, if my processing thread isn't busy, I can take an image from that list and hand it over. My issue is that I am worried that since I am adding the image "Image1" to the list, then filling "Image1" with a new image (during the next image acquisition), I will be replacing the image stored in the list with the new image as well (as the image variable is actually just a pointer). So, my code looks a little like this:

        while (!exitcondition)
        {
            if(ImageAvailabe())
            {
                Image1 = AcquireImage();
                ImgList.Add(Image1);
            }
            if(ImgList.Count > 0)
            {
                ProcessEngine.NewImage(ImgList[0]);
                ImgList.RemoveAt(0);
            }
        }

    Given the above, how can I ensure that:
    - I don't replace all items in the list every time Image1 is modified.
    - I don't need to pre-declare a number of images in order to do this kind of processing.
    - I don't create a memory devouring monster.
    Any advice is greatly appreciated.
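    The worry comes down to reference semantics. The sketch below is in Java rather than C#, but the rule is the same in both: the list stores a reference to the object, so re-pointing the local variable at a brand-new image leaves the queued element alone, while mutating the same buffer in place would change it. All names here are illustrative:

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class ReferenceDemo {
            static class Image {
                int[] pixels;
                Image(int[] pixels) { this.pixels = pixels; }
            }

            public static void main(String[] args) {
                Deque<Image> queue = new ArrayDeque<>();

                Image image1 = new Image(new int[] {1, 1, 1});
                queue.add(image1);                          // the queue now holds a reference to this object

                image1 = new Image(new int[] {2, 2, 2});    // re-pointing the variable: queue element unchanged
                System.out.println(queue.peek().pixels[0]); // prints 1

                queue.peek().pixels[0] = 9;                 // mutating the queued object itself DOES change it
                System.out.println(queue.peek().pixels[0]); // prints 9
            }
        }

    So as long as AcquireImage() hands back a freshly allocated image each time, rather than refilling one shared buffer, adding it to the list is safe; a queue that acquisition appends to and processing drains also matches the burst-then-catch-up pattern more naturally than removing from index 0 of a list.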

    Read the article

  • object / class methods serialized as well?

    - by Mat90
    I know that data members are saved to disk, but I was wondering whether an object's/class's methods are saved in binary format as well, because I found some contradictory info. For example, Ivor Horton writes: "Class objects contain function members as well as data members, and all the members, both data and functions, have access specifiers; therefore, to record objects in an external file, the information written to the file must contain complete specifications of all the class structures involved." And: Are methods also serialized along with the data members in .NET? Thus: are a method's assembly instructions (opcodes and operands) stored to disk as well? Just like a precompiled LIB or DLL? During the DOS ages I used assembly now and then. As far as I remember from Delphi and the following site (answer by dan04): Are methods also serialized along with the data members in .NET? sizeof(<OBJECT or CLASS>) will give the size of all data members together (no methods/procedures). Also a nice C example is given there, with data and members declared in one class/struct, but at runtime the methods are separate procedures acting on a struct of data. However, I think that later class/object implementations like Pascal's VMT may be different in memory.
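    For what it's worth, the same question is easy to probe in Java (used here purely as an illustration, since the question is about .NET and Delphi): standard serialization writes field values plus a class descriptor, while the method bodies stay in the compiled class file and are never part of the stream. A minimal sketch:

        import java.io.*;

        public class SerializeProbe {
            static class Point implements Serializable {
                private static final long serialVersionUID = 1L;
                double x = 3.0, y = 4.0;
                double length() { return Math.sqrt(x * x + y * y); } // never written to the stream
            }

            public static void main(String[] args) throws IOException {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                    out.writeObject(new Point());
                }
                // The stream holds the class name, the field names/types and the two doubles,
                // but no bytecode for length(); deserialization relies on the class being on the classpath.
                System.out.println("serialized size in bytes: " + bytes.size());
            }
        }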

    Read the article

  • WPF and LINQ/SQL - how and where to keep track of changes?

    - by Groky
    I have a WPF application built using the MVVM pattern: My Models come from LINQ to SQL. I use the Repository Pattern to abstract away the DataContext. My ViewModels have a reference to a Model. Setting a property on the ViewModel causes that value to be written through to the Model. As you can see, my data is stored in my Model, and changes are therefore tracked by my DataContext. However, in this question I read: The guidelines from the MSDN documentation on the DataContext class are what I would recommend following: In general, a DataContext instance is designed to last for one "unit of work" however your application defines that term. A DataContext is lightweight and is not expensive to create. A typical LINQ to SQL application creates DataContext instances at method scope or as a member of short-lived classes that represent a logical set of related database operations. How do you track your changes? In your DataContext? In your ViewModel? Elsewhere?

    Read the article

  • unable to return 'true' value in C function

    - by Byron
    I'm trying to check an input 5-byte array (p) against a 5-byte array stored in flash (data), using the following function (e2CheckPINoverride), to simply return either a true or false value. But it seems, no matter what I try, it only returns 'false'. I call the function here:

        if (e2CheckPINoverride(pinEntry) == 1){
            PTDD_PTDD1 = 1;
        }
        else{
            PTDD_PTDD1 = 0;
        }

    Here is the function:

        BYTE e2CheckPINoverride(BYTE *p)
        {
            BYTE i;
            BYTE data[5];

            if(e2Read(E2_ENABLECODE, data, 5))
            {
                if(data[0] != p[0]) return FALSE;
                if(data[1] != p[1]) return FALSE;
                if(data[2] != p[2]) return FALSE;
                if(data[3] != p[3]) return FALSE;
                if(data[4] != p[4]) return FALSE;
            }
            return TRUE;
        }

    I have already defined TRUE and FALSE in the defines.h file:

        #ifndef TRUE
        #define TRUE ((UCHAR)1)
        #endif

        #ifndef FALSE
        #define FALSE ((UCHAR)0)
        #endif

    where typedef unsigned char UCHAR; When I step through the code, it performs all the checks correctly, passes in the correct value, compares it correctly and then breaks at the correct point, but it is unable to process the return value of TRUE. Please help?

    Read the article

  • Best way to create a SPARQL endpoint for a RDBMS (MySQL database)

    - by Ankur
    I am doing (well, want to do) some experiments with Linked Open Data sets, particularly those put out by governments. I have an RDBMS (more specifically MySQL). I designed it with semantic web ideas in mind, i.e. I have information stored as objects, predicates and classes which define objects. In turn, all objects are related to each other through statements of the form subject -- predicate -- object (where the subjects are from the objects table). I want to be able to query other RDF triple stores from my application and let other triple stores query my data. Is it possible to "set something up" so that this is possible? I have looked at Jena. Using Jena seems to mean I have to use it as the storage application rather than MySQL - the only problem with this is that I include a new concept called a category (which I don't think is part of the semantic web languages). I will use categories to help with displaying information (they don't have any other meaning), but using Jena seems to mean that I can't organise predicates under categories for more convenient viewing. I am using Java, so a Java API is preferred. It's also possible I misunderstood the purpose of Jena, and maybe it can be of use, but I am not sure how. I am sure four days from now this question will seem rather silly, but at the moment I am somewhat confused about how to proceed.
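    One hedged way to bridge the two worlds without abandoning MySQL: read the subject/predicate/object rows out with JDBC, pour them into an in-memory Jena model, and run SPARQL over that model (the same model can later be served over HTTP, for example through Fuseki). A sketch assuming a reasonably recent Apache Jena (3.x) and a hypothetical statements(subject, predicate, object) table; all names and the namespace are assumptions:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        import org.apache.jena.query.QueryExecution;
        import org.apache.jena.query.QueryExecutionFactory;
        import org.apache.jena.query.QuerySolution;
        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;
        import org.apache.jena.rdf.model.Property;
        import org.apache.jena.rdf.model.Resource;

        public class MySqlToSparql {
            static final String NS = "http://example.org/data/"; // assumed namespace

            public static void main(String[] args) throws Exception {
                Model model = ModelFactory.createDefaultModel();

                // Pull the triples out of the assumed statements table.
                try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/lod", "user", "pass");
                     Statement st = con.createStatement();
                     java.sql.ResultSet rs = st.executeQuery("SELECT subject, predicate, object FROM statements")) {
                    while (rs.next()) {
                        Resource s = model.createResource(NS + rs.getString("subject"));
                        Property p = model.createProperty(NS + rs.getString("predicate"));
                        s.addProperty(p, rs.getString("object")); // stored here as a plain literal
                    }
                }

                // Any SPARQL query now runs against the in-memory model.
                String sparql = "SELECT ?s ?o WHERE { ?s <" + NS + "locatedIn> ?o } LIMIT 10";
                try (QueryExecution qexec = QueryExecutionFactory.create(sparql, model)) {
                    org.apache.jena.query.ResultSet results = qexec.execSelect();
                    while (results.hasNext()) {
                        QuerySolution row = results.next();
                        System.out.println(row.get("s") + " -> " + row.get("o"));
                    }
                }
            }
        }

    The extra "category" notion could stay in MySQL for display purposes, or simply be modelled as one more predicate; it does not have to appear in the triples you expose.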

    Read the article

  • Entity Framework autoincrement key

    - by Tommy Ong
    I'm facing an issue of a duplicated incremental field in a concurrency scenario. I'm using EF as the ORM tool, attempting to insert an entity with a field that acts as an incremental INT field. Basically this field is called "SequenceNumber"; for each new record, before insert, the code reads the database using MAX to get the last SequenceNumber, appends +1 to it, and saves the changes. Between getting the last SequenceNumber and saving, that's where the concurrency happens. I'm not using the ID for SequenceNumber, as it is not a unique constraint and may reset on certain conditions such as monthly, yearly, etc.

        InvoiceNumber    | SequenceNumber | DateCreated
        INV00001_08_14   | 1              | 25/08/2014
        INV00001_08_14   | 1              | 25/08/2014  <= (concurrency is creating two SeqNo 1)
        INV00002_08_14   | 2              | 25/08/2014
        INV00003_08_14   | 3              | 26/08/2014
        INV00004_08_14   | 4              | 27/08/2014
        INV00005_08_14   | 5              | 29/08/2014
        INV00001_09_14   | 1              | 01/09/2014  <= (sequence number reset)

    The invoice number is formatted based on the SequenceNumber. After some research I've ended up with these possible solutions, but I want to know the best practice:

    1) Optimistic concurrency, locking the table from any reads until the current transaction is completed (I'm not fond of this idea, as I guess the performance impact would be great).

    2) Create a stored procedure solely for this purpose, which does the select and insert in a single statement, so that the concurrency window is at a minimum (I would prefer an EF-based approach if possible).

    Read the article

  • Is it a good idea to use MySQL and Neo4j together?

    - by Sanoj
    I will make an application with a lot of similar items (millions), and I would like to store them in a MySQL database, because I would like to do a lot of statistics and searches on specific values for specific columns. But at the same time, I will store relations between all the items, which are related in many connected binary-tree-like structures (transitive closure), and relational databases are not good at that kind of structure, so I would like to store all relations in Neo4j, which has good performance for this kind of data. My plan is to have all data except the relations in the MySQL database, and all relations with item_id stored in the Neo4j database. When I want to look up a tree, I first search Neo4j for all the item_ids in the tree, then I search the MySQL database for all the specified items in a query that would look like:

        SELECT * FROM items
        WHERE item_id = 45
           OR item_id = 345435
           OR item_id = 343
           OR item_id = 78
           OR item_id = 4522
           OR item_id = 676
           OR item_id = 443
           OR item_id = 4255
           OR item_id = 4345

    Is this a good idea, or am I very wrong? I haven't used graph databases before. Are there any better approaches to my problem? How would the MySQL query perform in this case?
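    A small sketch of the second step, assuming the graph lookup has already produced the item_ids: building one parameterized IN (...) query with JDBC is usually friendlier than a long chain of ORs and keeps the ids out of the SQL string. The items table and item_id column are the question's; everything else, including the name column printed at the end, is illustrative:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.List;

        public class ItemLookup {
            // Fetch the MySQL rows for the item_ids returned by the Neo4j traversal.
            public static void fetchItems(Connection con, List<Long> itemIds) throws Exception {
                if (itemIds.isEmpty()) return;

                // Build "SELECT * FROM items WHERE item_id IN (?, ?, ..., ?)"
                StringBuilder sql = new StringBuilder("SELECT * FROM items WHERE item_id IN (");
                for (int i = 0; i < itemIds.size(); i++) {
                    sql.append(i == 0 ? "?" : ", ?");
                }
                sql.append(")");

                try (PreparedStatement ps = con.prepareStatement(sql.toString())) {
                    for (int i = 0; i < itemIds.size(); i++) {
                        ps.setLong(i + 1, itemIds.get(i));
                    }
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // "name" is an assumed column, used here only to show reading the row.
                            System.out.println(rs.getLong("item_id") + ": " + rs.getString("name"));
                        }
                    }
                }
            }
        }

    With an index (or primary key) on item_id, an IN list of a few dozen ids is a cheap lookup; if the trees get large, batching the ids in chunks keeps each statement a reasonable size.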

    Read the article

  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with the optimization of the structure of the database. I'll try to explain it exactly. I am creating a project where we can add different values, but these values must have different column types in the database (e.g. int, double, varchar). What is the best way to store the different types of values in the database? In the project I'm using Propel 1.6. The point is to be able to add a value with an 'int', 'varchar' or other column type, so that searching the table is efficient. In total, I have two ideas. The first is to create a table "value", which will have the columns "id", "value_int", "value_double", "value_varchar", etc. - with the corresponding column types. Depending on the type of value, records will be saved with the value in the appropriate column (the rest will be NULL). The second solution is to create separate tables such as "value_int", "value_varchar", etc. Each would have the columns "id" and "value", where "value" has the relevant type (i.e. int, varchar, etc). I must admit that I do not believe in either of the above solutions; originally I was thinking about one table "value" where the column would be of type "text", but this solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance.

    EDIT: For example, we have three tables:

        USER:  [table of users]
            * id
            * name

        FIELD: [table of profile fields - where the column 'type' is the type of field, e.g. int or varchar]
            * id
            * type
            * name

        VALUE:
            * id
            * user_id  (FK user.id)
            * field_id (FK field.id)
            * value

    So we have a user in each row of the USER table, and the profile is stored in the VALUE table. But each profile field may have a different type (column 'type' in the FIELD table), and based on that I would want the value to be added to the appropriate column of the appropriate type.

    Read the article

  • mysql query using global variables

    - by Carlos
    I am trying to run a query to activate a user's account. I am not sure if I am having a problem with the query itself or if there's something else that I don't know about. Here is the code:

        if($_SESSION['lastid']&&$_SESSION['random'])
        {
            $check= mysql_query('SELECT * FROM members WHERE id= "$_SESSION[lastid]" AND random = " $_SESSION[random]"');
            $checknum = mysql_num_rows($check);
            //$checknum = mysql_query($check) or die("Error: ". mysql_error(). " with query ". $check);

            if($checknum != 0) // run query to activate the account
            {
                $acti= mysql_query('UPDATE members SET activation = "1" WHERE id= "$_SESSION[lastid]"');
                die('Your account has been activated. You may now log in!');
            }else{
                echo('Invalid id or activation code.') . ' lastid: ' .$_SESSION['lastid'] . ' random: ' .$_SESSION['random'] ;
                // die ('Invalid id or activation code.');
            }
        }else{
            die('Could not either find id or random number!');
        }

    This is the warning I am getting from MySQL:

        Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in /hermes/bosweb26b/b2501/servername/folder/file.php on line 30

    But when I echo the variables out, I get the same values that are stored in the database:

        Invalid id or activation code. lastid: 2 and random: 36308075

    Could someone please give me a hint? Thank you.

    Read the article

  • how to use iterator in c++?

    - by tsubasa
    I'm trying to calculate distance between 2 points. The 2 points I stored in a vector in c++: (0,0) and (1,1). I'm supposed to get the results 0 1.4 1.4 0, but the actual result that I got is 0 1 -1 0. I think there's something wrong with the way I use the iterator in the vector. Could somebody help? I posted the code below.

        typedef struct point {
            float x;
            float y;
        } point;

        float distance(point *p1, point *p2)
        {
            return sqrt((p1->x - p2->x)*(p1->x - p2->x) + (p1->y - p2->y)*(p1->y - p2->y));
        }

        int main()
        {
            vector <point> po;

            point p1;
            p1.x=0;
            p1.y=0;

            point p2;
            p2.x=1;
            p2.y=1;

            po.push_back(p1);
            po.push_back(p2);

            vector <point>::iterator ii;
            vector <point>::iterator jj;

            for (ii=po.begin(); ii!=po.end(); ii++)
            {
                for (jj=po.begin(); jj!=po.end(); jj++)
                {
                    cout<<distance(ii,jj)<<" ";
                }
            }
            return 0;
        }

    Read the article
