Search Results

Search found 68200 results on 2728 pages for 'web database'.


  • Hibernate design to speed up querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster: I load all of the tables below into Lists to perform the route planner logic. I would appreciate any ideas on how to improve performance, or any suggestions for another way to approach the problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT, DOUBLE), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID  DISTANCE
        1   1            2          0.383657
        2   1            17         0.173201
        3   1            63         0.258781
        4   1            64         0.013726
        5   1            65         0.459829
        6   1            95         0.458769

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        STOPID      ROUTEID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "STOPID")
            private int stopID;

            @Column(name = "ROUTEID", nullable = false)
            private int routeID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "COORDINATEID", nullable = false)
            private Coordinate coordinate;
        }

        @Entity
        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private int CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private int CoordinateID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "FROMCOORDID", nullable = false)
            private Coordinate fromCoordID;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "TOCOORDID", nullable = false)
            private Coordinate toCoordID;

            @Column(name = "DISTANCE", nullable = false)
            private double distance;
        }

    Read the article

  • Designing a Tag table that tells how many times it's used

    - by Satoru.Logic
    Hi, all. I am trying to design a tagging system with a model like this:

        Tag:
            content = CharField
            creator = ForeignKey
            used = IntegerField

    It is a many-to-many relationship between tags and what's been tagged. Every time I insert a record into the association table, Tag.used is incremented by one, and decremented by one in case of deletion. Tag.used is maintained because I want to speed up answering the question 'How many times is this tag used?'. However, this obviously seems to slow insertion down. Please tell me how to improve this design. Thanks in advance.
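
    One common way to keep such a counter consistent without extra work in application code is to maintain it with triggers on the association table. The following is only a rough sketch in MySQL-style SQL, using hypothetical table names tag and tag_item (the Django model above would map to similar tables); the alternative is to drop the counter entirely and rely on COUNT(*) over an indexed tag_id column.

        -- Hypothetical schema: tag(id, content, creator_id, used), tag_item(tag_id, item_id)
        CREATE TRIGGER tag_item_after_insert
        AFTER INSERT ON tag_item
        FOR EACH ROW
            UPDATE tag SET used = used + 1 WHERE id = NEW.tag_id;

        CREATE TRIGGER tag_item_after_delete
        AFTER DELETE ON tag_item
        FOR EACH ROW
            UPDATE tag SET used = used - 1 WHERE id = OLD.tag_id;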

    Read the article

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and Bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the stream table. Every three minutes, a background process repopulates the Stream table with new entries (i.e. whenever the daemon adds new entries after checking the RSS feeds for updates).

    Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that will reduce duplication, make processing faster and give me some flexibility in how I display the entries.

    The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. do not exist in metadata) and which do not already exist in the stream.

    Attempted solution: adding limit 100 to the end speeds up the query considerably, but repeated executions keep adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). This is close but not quite what I wanted to do. Does anyone have any advice (nosql?) or know a more efficient way of composing the query?
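
    For reference, a sketch of one common rewrite: replacing the NOT IN subqueries with anti-joins (LEFT JOIN ... IS NULL) and indexing the joined columns often helps the optimizer. This assumes the table and column names from the question, is untested against the actual schema, and the index names are purely illustrative.

        -- Assumed supporting indexes (illustrative names):
        -- create index idx_metadata_user_entry on metadata (user_id, entry_id);
        -- create index idx_stream_user_entry   on stream (user_id, entry_id);
        -- create index idx_entries_subscription on entries (subscription_id);

        insert into stream (entry_id, user_id)
        select entries.id, su.user_id
        from entries
        inner join subscriptions_users su on su.subscription_id = entries.subscription_id
        left join metadata m on m.user_id = su.user_id and m.entry_id = entries.id
        left join stream   s on s.user_id = su.user_id and s.entry_id = entries.id
        where su.user_id = 1
          and m.entry_id is null   -- not already read
          and s.entry_id is null   -- not already queued
        order by entries.id
        limit 100;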

    Read the article

  • mailto fails in IE when there is long body text. Is there any way to resolve this?

    - by MedicineMan
    I am having a problem using Internet Explorer 8 (IE8) to open mailto links with long messages. After the user clicks on the link, IE changes to an about:blank page and never completes the call to outlook to create an email Here's an example: <a href="mailto:[email protected]?subject=123456789&amp;body=111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111">mailto fails in IE8</a> If I shorten the list of 1's, the email is generated and can be sent. Is this a known IE issue? What are the limitations?

    Read the article

  • Web development - relative URLs without duplicating files

    - by eshriek
    I have a site with index.php in the root folder, images in /img, and overview.php in /content. I have a sidebar.php file that is included in both index.php and overview.php. How should I refer to /img/image.gif if I include a link in each file? The location of image.gif changes relative to the location of the file that references it. Using /img/image.gif in sidebar.php will work in index.php, but it fails for the file located at /content/overview.php.

    The only solutions that I can see are to either include a separate sidebar.php in each subdirectory, or include an /img directory in every sub-directory. The best suggestion that I can find is to use the <base> html tag, as suggested here: Change relative link paths for included content in PHP. However, in the same link, SamGoody suggests that the <base> tag "is no longer properly supported in Internet Explorer, since version 7." I'd like some insight on the matter before committing to a course of action. Thanks.

    EDIT: I am using the wrong approach below with "../"

    Example - root/index.php:

        ...
        <link rel="stylesheet" type="text/css" href="style.css" />
        <title>title</title>
        </head>
        <body>
        <?php include('include/header.php'); ?>
        <?php include('include/menu.php'); ?>
        ...

    root/include/header.php:

        ...
        <div id="header">
        <span class="fl"><img src="img/dun1.png"/></span><span class="fr"><img src="img/dun2.png"/></span>
        ...

    root/content/overview.php:

        ...
        <link rel="stylesheet" type="text/css" href="../style.css" media="screen" />
        <title>Overview</title>
        </head>
        <body>
        <?php include('../include/header.php'); ?>
        <?php include('../include/menu.php'); ?>
        ...

    Read the article

  • Are these jobs for developers, designers, or for the client itself?

    - by jitendra
    - Spell checking
    - Grammar checking
    - Descriptive alt text for big chart, graph images, technical images
    - Writing table summary and caption
    - Descriptive link text
    - Color contrast checking
    - Deciding in content what should be H2, H3, H4... and what should be <strong> or <span class="boldtext">
    - Meta description and keywords for each page
    - Image compression
    - Deciding filenames for images, PDF etc.
    - Deciding the page's <title> for each page

    Read the article

  • Many to many table design question

    - by user169867
    Originally I had 2 tables in my DB, [Property] and [Employee]. Each employee can have 1 "Home Property", so the employee table has a HomePropertyID FK field to Property. Later I needed to model the situation where, despite having only 1 "Home Property", the employee did work at or cover for multiple properties. So I created an [Employee2Property] table that has EmployeeID and PropertyID FK fields to model this many-to-many relationship.

    Now I find that I need to create other many-to-many relationships between employees and properties. For example, there may be multiple employees that are managers for a property, or multiple employees that perform maintenance work at a property, etc. My questions are:

    1) Should I create separate many-to-many tables for each of these situations, or should I just create 1 more table like [PropertyAssociationType] that lists the types of associations an employee can have with a property, and add an FK field to [Employee2Property] such as PropertyAssociationTypeID that explains what the association is? I'm curious about the pros/cons, or if there's another better way.

    2) Am I stupid and going about this all wrong?

    Thanks for any suggestions :)
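
    For what it's worth, here is a rough T-SQL-style sketch of the second option (a single association table plus a lookup of association types). The table and column names simply follow the ones in the question, the key column names on Employee and Property are assumptions, and this is a sketch rather than a recommendation from the original thread.

        CREATE TABLE PropertyAssociationType (
            PropertyAssociationTypeID INT PRIMARY KEY,
            Name VARCHAR(50) NOT NULL          -- e.g. 'WorksAt', 'Manages', 'Maintains'
        );

        CREATE TABLE Employee2Property (
            EmployeeID INT NOT NULL REFERENCES Employee(EmployeeID),
            PropertyID INT NOT NULL REFERENCES Property(PropertyID),
            PropertyAssociationTypeID INT NOT NULL
                REFERENCES PropertyAssociationType(PropertyAssociationTypeID),
            PRIMARY KEY (EmployeeID, PropertyID, PropertyAssociationTypeID)
        );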

    Read the article

  • Asset tracking in "real-time" - how best to display in browser?

    - by mawg
    I am developing an asset tracking system, standard LAMP, and now am wondering how best to present the data to the user in the browser. I expect to track at most a few thousand items, and to refresh them every second or so. I want to draw a floorplan or map of the area and represent the assets symbolically on it (with different symbols for different classes of assets). Additionally, the user should be able to click on an asset to interact with it, and search for a particular asset and centre the screen on it, draw a circle round it, etc. http://graphite.wikidot.com/ looks good - is there any alternative? At its simplest, I suppose I could just generate a JPEG and display it, using CSS to let me know if/where a user clicks ... but what's the "best" way to do it?

    Read the article

  • PostgreSQL syntax: WHILE EXISTS loop

    - by veilig
    I'm working on a function from Joe Celko's book Trees and Hierarchies in SQL for Smarties. I'm trying to delete a subtree from an adjacency list, but part of my function is not working yet.

        WHILE EXISTS -- mark leaf nodes
            (SELECT * FROM OrgChart
              WHERE boss_emp_nbr = -99999
                AND emp_nbr > -99999)
        LOOP
            -- get list of next level subordinates
            DELETE FROM WorkingTable;
            INSERT INTO WorkingTable
            SELECT emp_nbr FROM OrgChart WHERE boss_emp_nbr = -99999;

            -- mark next level of subordinates
            UPDATE OrgChart
               SET emp_nbr = -99999
             WHERE boss_emp_nbr IN (SELECT emp_nbr FROM WorkingTable);
        END LOOP;

    My question: is WHILE EXISTS correct for use with PostgreSQL? I appear to be stumbling and getting caught in an infinite loop in this part. Perhaps there is a more correct syntax I am unaware of.
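
    For reference, WHILE expression LOOP is valid PL/pgSQL and EXISTS (subquery) is an ordinary boolean expression there, but the construct only works inside a PL/pgSQL function (or, on newer PostgreSQL versions, a DO block), not as plain SQL. A minimal, untested sketch of the wrapping is below; the loop body is unchanged from the question, so termination still depends on the UPDATE eventually removing every row that satisfies the EXISTS test.

        DO $$
        BEGIN
            WHILE EXISTS (SELECT 1 FROM OrgChart
                           WHERE boss_emp_nbr = -99999
                             AND emp_nbr > -99999) LOOP
                -- get list of next level subordinates
                DELETE FROM WorkingTable;
                INSERT INTO WorkingTable
                SELECT emp_nbr FROM OrgChart WHERE boss_emp_nbr = -99999;

                -- mark next level of subordinates
                UPDATE OrgChart
                   SET emp_nbr = -99999
                 WHERE boss_emp_nbr IN (SELECT emp_nbr FROM WorkingTable);
            END LOOP;
        END $$;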

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data, 500M entries now and 200K new records every day. The total size is around 15GB for now. My clients mostly query the table via a PHP script, and the size of the result set is around 10K records (not very large).

        select * from T where timestamp > X and timestamp < Y and additionFilters

    And I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single 16G memory box, and I would love to see some good suggestions for hosting this at low cost while still allowing me to scale up for performance if needed. The table serves:

        1. Query:  90%
        2. Insert: 9.9%
        3. Update: 0.1%  <-- very rare
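
    Whatever the hosting ends up being, a range scan like the one above usually wants at least an index on the timestamp column, and on newer PostgreSQL versions time-based partitioning can keep old data cheap to drop or archive. A minimal sketch with illustrative names and columns (the payload column is an assumption, not part of the question's schema):

        -- Simple btree index covering the range predicate
        CREATE INDEX idx_t_timestamp ON T (timestamp);

        -- On PostgreSQL 10+ the table could instead be declared range-partitioned by time:
        CREATE TABLE t_partitioned (
            ts       timestamptz NOT NULL,
            payload  text
        ) PARTITION BY RANGE (ts);

        CREATE TABLE t_2024_06 PARTITION OF t_partitioned
            FOR VALUES FROM ('2024-06-01') TO ('2024-07-01');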

    Read the article

  • How to show unread subforums?

    - by bilygates
    I have written a simple forum in PHP using PostgreSQL. The forum consists of a number of subforums (or categories, if you like) that contain topics. I have a table that stores the last time a user visited a topic. It's something like this: user_id, topic_id, timestamp. I can easily determine what topics should be marked as unread by comparing the timestamp of the last topic reply with the timestamp of the last user visit. My question is: how do I efficiently determine what subforums (categories) should be marked as unread? All I've come up with is this: every time a user visits a topic, update the visit timestamp and check whether all the topics in the current subforum are read or unread. If they are all read, mark the subforum as read for the user; else, mark it as unread. But I think there must be another way. Thank you in advance.
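
    One alternative to maintaining a per-subforum read flag is to compute unread subforums on demand: a subforum is unread if it contains at least one topic whose last reply is newer than that user's last visit, or that the user has never visited. A rough sketch, assuming hypothetical tables topics(id, forum_id, last_reply_at) and topic_visits(user_id, topic_id, visited_at) that roughly match the question's description:

        SELECT DISTINCT t.forum_id
        FROM topics t
        LEFT JOIN topic_visits v
               ON v.topic_id = t.id
              AND v.user_id  = 42              -- the current user
        WHERE v.topic_id IS NULL               -- never opened
           OR t.last_reply_at > v.visited_at;  -- or replied to since the last visit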

    Read the article

  • Data Modeling Help - Do I add another table, change existing table's usage, or something else?

    - by StackOverflowNewbie
    Assume I have the following tables and relationships:

        Person
        - Id (PK)
        - Name

    A Person can have 0 or more pets:

        Pet
        - Id (PK)
        - PersonId (FK)
        - Name

    A person can have 0 or more attributes (e.g. age, height, weight):

        PersonAttribute
        - Id (PK)
        - PersonId (FK)
        - Name
        - Value

    PROBLEM: I need to represent pet attributes, too. As it turns out, these pet attributes are, in most cases, identical to the attributes of a person (e.g. a pet can have an age, height, and weight too). How do I represent pet attributes?

    Do I create a PetAttribute table?

        PetAttribute
        - Id (PK)
        - PetId (FK)
        - Name
        - Value

    Do I change PersonAttribute to GenericAttribute and have 2 foreign keys in it - one connecting to Person, the other connecting to Pet?

        GenericAttribute
        - Id (PK)
        - PersonId (FK)
        - PetId (FK)
        - Name
        - Value

    NOTE: if PersonId is set, then PetId is not set. If PetId is set, PersonId is not set. Or do I do something else?
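
    A sketch of the second option in plain SQL, with a CHECK constraint making the "exactly one of PersonId/PetId" rule explicit. The names follow the question; the constraint and the column sizes are my additions, not something stated in the original post.

        CREATE TABLE GenericAttribute (
            Id        INT PRIMARY KEY,
            PersonId  INT NULL REFERENCES Person(Id),
            PetId     INT NULL REFERENCES Pet(Id),
            Name      VARCHAR(50)  NOT NULL,
            Value     VARCHAR(255) NOT NULL,
            -- exactly one owner must be set
            CHECK (
                (PersonId IS NOT NULL AND PetId IS NULL)
             OR (PersonId IS NULL     AND PetId IS NOT NULL)
            )
        );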

    Read the article

  • how to design a schema where the columns of a table are not fixed

    - by hIpPy
    I am trying to design a schema where the columns of a table are not fixed. For example, I have an Employee table where the columns are not fixed and vary (the attributes of Employee are not fixed and vary). The options I see are:

    1. Nullable columns in the Employee table itself, i.e. no normalization.
    2. Instead of adding nullable columns, separate those columns out into their own individual tables. For example, if Address is a column to be added, then create table Address[EmployeeId, AddressValue].
    3. Create tables ExtensionColumnName[EmployeeId, ColumnName] and ExtensionColumnValue[EmployeeId, ColumnValue]. ExtensionColumnName would have ColumnName as "Address" and ExtensionColumnValue would have ColumnValue as the address value.

        Employee table
        - EmployeeId
        - Name

        ExtensionColumnName table
        - ColumnNameId
        - EmployeeId
        - ColumnName

        ExtensionColumnValue table
        - EmployeeId
        - ColumnNameId
        - ColumnValue

    There is a drawback in the first two ways: the schema changes with every new attribute. Note that adding a new attribute is frequent. I am not sure whether this is a good or bad design. If someone has had a similar decision to make, please give some insight on things like foreign keys / data integrity, indexing, performance, reporting etc.
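
    A variant of option 3 that avoids repeating the attribute name for every employee is the usual entity-attribute-value split: a small table defining each attribute once, and a value table keyed by (employee, attribute). This is only a rough sketch with illustrative names, not a design from the original question.

        CREATE TABLE EmployeeAttributeDef (
            AttributeId   INT PRIMARY KEY,
            AttributeName VARCHAR(50) NOT NULL UNIQUE   -- e.g. 'Address', 'Phone'
        );

        CREATE TABLE EmployeeAttributeValue (
            EmployeeId  INT NOT NULL REFERENCES Employee(EmployeeId),
            AttributeId INT NOT NULL REFERENCES EmployeeAttributeDef(AttributeId),
            Value       VARCHAR(255),
            PRIMARY KEY (EmployeeId, AttributeId)
        );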

    Read the article

  • Oracle - Are there any effects of not having a primary key on a table?

    - by Sathya
    We use sequence numbers for primary keys on our tables. There are some tables where we don't really use the primary key for any querying purpose, but we do have indexes on other columns. These are non-unique indexes. The queries use these non-primary-key columns in their WHERE conditions. So I don't really see any benefit of having a primary key on such tables. My experience with SQL 2000 was that it would only replicate tables that had a primary key; otherwise it would not. I am using Oracle 10gR2. I would like to know if there are any such side-effects of having tables that don't have a primary key.

    Read the article

  • How to store data in MySQL to get the fastest performance?

    - by Oden
    Hey, I'm trying to decide which of the following two approaches would give me the fastest performance for a user messaging module inside my site.

    The first one I thought about is a multi-table setup, which has a connection table and a main table. The connection table holds the connection between accounts and the messaging table. In this case a query would look like the following, to get some data about the author and the messages he has sent:

        SELECT m.*, a.username
        FROM messages AS m
        LEFT JOIN connection_table ON (message_id = m.id)
        LEFT JOIN accounts AS a ON (account_id = a.id)
        WHERE m.id = '32341'

    Inserting into it is a little bit more "complicated".

    My other idea, and in my thoughts the better solution to this problem, is to store the data I would put in a connection table in the same table where I store the data of the mail. Sounds like I would get lots of duplicated entries, but no, because I have a field of text type that holds user ids like this: *24*32*249*. If I want to query them, I use the MySQL LIKE operator. Deleting is another problem, but for this I have one more field where I store who has deleted the post. Sadly, I don't know how to join on this.

    So what would you recommend? Are there other ways?
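
    As a point of comparison, the relational version of the second idea usually keeps a real recipient (junction) table instead of a *24*32*249* string, which keeps the ids indexable and joinable, whereas LIKE '%*32*%' cannot use an index. This is only a rough sketch with illustrative names, and it assumes a messages.author_id column that is not stated in the question.

        CREATE TABLE message_recipients (
            message_id  INT NOT NULL,
            account_id  INT NOT NULL,
            deleted_at  DATETIME NULL,          -- per-recipient delete instead of a "who deleted" field
            PRIMARY KEY (message_id, account_id),
            FOREIGN KEY (message_id) REFERENCES messages(id),
            FOREIGN KEY (account_id) REFERENCES accounts(id)
        );

        -- Messages visible to user 32 that this user has not deleted:
        SELECT m.*, a.username
        FROM message_recipients r
        JOIN messages m ON m.id = r.message_id
        JOIN accounts a ON a.id = m.author_id
        WHERE r.account_id = 32
          AND r.deleted_at IS NULL;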

    Read the article

  • Good web hosting for ASP.NET MVC 1.0 app

    - by magellings
    I'm looking for hosting for an ASP.NET MVC 1.0 app. I've narrowed down with research to either asphostportal, asphostcentral, godaddy, or 1&1. I've ruled out crystaltech and softsyshosting on price, given the better plans elsewhere. I will be running a small e-commerce site written with ASP.NET MVC 1.0 and want to be sure it will work, while also looking for the cheapest price with the best value in terms of disk space/bandwidth. And bandwidth is basically how much data can be sent from your site per month, right? Any opinions appreciated as I'm finding this tough to narrow down. I know you can bin deploy MVC, but I've heard full trust mode is required as well as some routing rules in IIS. 1&1 says they can't enable full trust. This is what I was looking at:

        name            data (disk space/bandwidth)   price                      MVC enabled
        crystal tech    500MB/50GB                    7.95 + 7.95 setup
                        2000MB/200GB                  16.95
        softsyshosting  500MB/5GB                     3.50 + 12/year domain
                        1000MB/10GB                   5.84
                        3000MB/30GB                   13.33
        asphostportal   5GB/50GB                      5.75 + 8.99/year           yes
                        10GB/100GB                    10.25
        asphostcentral  2GB/15GB                      4.99                       yes
                        3GB/30GB                      7.99/month, domain free
                        5GB/40GB                      11.99
        godaddy         10GB/300GB                    10.69 + 4.74/month
                        150GB/1500GB                  6.99/month
        1&1             10GB/unlimited                3.99 + free domain
                        150GB/unlimited               6.99

    1&1 seems to be the best value if the MVC app will work. I'm a bit confused by bandwidth being unlimited. It may seem like a good thing, but what if one website on the server is a resource hog because of this?

    Read the article

  • Stripes link event triggering validation that is incorrect.

    - by Davoink
    I have a stripes:link tag in a JSP with an event attribute:

        <stripes:link href="${actionBean.context.currentStage.stripesForwardAction}"
                      addSourcePage="true" event="showTab2Link">

    This triggers the validation on the nested properties:

        @ValidateNestedProperties({
            @Validate(field="county",  required=true, minlength=2, maxlength=2, mask="\\d\\d"),
            @Validate(field="parish",  required=true, minlength=3, maxlength=3, mask="\\d\\d\\d"),
            @Validate(field="holding", required=true, minlength=4, maxlength=4, mask="\\d\\d\\d\\d")
        })

    However, this would have been fine if the actual values it is validating were not present, but they are present within the HTML and when debugging the bean. Why would the stripes:link trigger this? If I change it to a stripes:submit then it is fine. Thanks, Dave

    Read the article

  • Modularizing web applications

    - by Matt
    Hey all, I was wondering how big companies tend to modularize components on their pages. Facebook is a good example: There's a team working on Search that has its own CSS, JavaScript, HTML, etc. There's a team working on the news feed that has its own CSS, JavaScript, HTML, etc. ... And the list goes on. They cannot all be aware of what everyone is naming their div tags and whatnot, so what is the controller(?) doing to hook all these components in on the final page?

    Note: This doesn't just apply to Facebook - any company that has separate teams working on separate components has some logic that helps them out.

    EDIT: Thanks all for the responses; unfortunately I still haven't really found what I'm looking for. When you check out the source code (granted it's minified), the divs have UIDs. My guess is that there is a compilation process that runs through and makes each of the components unique, renaming divs and CSS rules... any ideas?

    EDIT 2: Thanks all for contributing your thoughts - the bounty went to the highest upvoted answer. The question was designed to be vague; I think it led to a really interesting discussion. As I improve my build process, I will contribute my own thoughts and experiences. Thanks all! Matt Mueller

    Read the article

  • InnoDB or MyISAM - Why not both?

    - by Skoder
    Hey. I'm new to databases, and I've read various threads about which is better between InnoDB and MyISAM. It seems the debate is always about using one or the other. Is it not possible to use both, depending on the table? What would be the disadvantages of doing this? As far as I can tell, the engine can be set during the CREATE TABLE command. Therefore, certain tables which are often read can be set to MyISAM, while tables that need transaction support can use InnoDB. I'm sure there must be a problem, otherwise this would be the ultimate answer :).
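
    For reference, the storage engine is indeed chosen per table in MySQL, so mixing the two in one schema looks like this (the tables are purely illustrative):

        -- Read-mostly lookup table
        CREATE TABLE country (
            id   INT PRIMARY KEY,
            name VARCHAR(100)
        ) ENGINE=MyISAM;

        -- Transactional table
        CREATE TABLE orders (
            id          INT AUTO_INCREMENT PRIMARY KEY,
            country_id  INT NOT NULL,
            total       DECIMAL(10,2) NOT NULL
        ) ENGINE=InnoDB;

    The usual caveats with mixing are that a single transaction cannot protect the MyISAM tables, and foreign keys are only enforced between InnoDB tables.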

    Read the article

  • What mutex/locking/waiting mechanism to use when writing a chat application with the Tornado Web Framework?

    - by user272973
    We're implementing a chat server using Tornado. The premise is simple: a user opens an HTTP ajax connection to the Tornado server, and the Tornado server answers only when a new message appears in the chat room. Whenever the connection closes, regardless of whether a new message came in or an error/timeout occurred, the client reopens the connection. Looking at Tornado, the question arises of what library we can use to let these calls wait on some central object that would signal them - A_NEW_MESSAGE_HAS_ARRIVED_ITS_TIME_TO_SEND_BACK_SOME_DATA. To describe this in Win32 terms, each async call would be represented as a thread hanging on a WaitForSingleObject(...) on some central Mutex/Event/etc. We will be operating in a standard Python environment (Tornado). Is there something built-in we can use, do we need an external library/server, or is there something Tornado recommends? Thanks

    Read the article

  • Replication - User defined table type not propagating to subscriber

    - by Aamod Thakur
    I created a user-defined table type named tvp_Shipment with two columns (id and name), generated a snapshot, and the user-defined table type was properly propagated to all the subscribers. I was using this TVP in a stored procedure and everything worked fine. Then I wanted to add one more column, created_date, to this table-valued parameter. I dropped the stored procedure (from replication too), dropped and recreated the user-defined table type with 3 columns, then recreated the stored procedure and enabled it for publication. When I generate a new snapshot, the changes in the user-defined table type are not propagated to the subscriber. The newly added column was not added to the subscription. The error messages:

        The schema script 'usp_InsertAirSa95c0e23_218.sch' could not be propagated to the subscriber.
        (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
        Get help: http://help/MSSQL_REPL-2147201001

        Invalid column name 'created_date'.
        (Source: MSSQLServer, Error number: 207)
        Get help: http://help/207

    Read the article
