Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Improving the performance of an NHibernate Data Access Layer.

    - by Amitabh
    I am working on improving the performance of the data access layer of an existing ASP.NET web application. The scenario: it is a web-based ASP.NET application; the data access layer is built with NHibernate 1.2 and exposed as a WCF service; the entity classes are marked with DataContract. Lazy loading is not used, and because relations are eager-fetched, a huge number of database objects get loaded into memory. The number of hits to the database is also high. For example, I profiled the application using NHProfiler and there were about 50+ SQL calls to load a single entity object by its primary key. I also cannot change the code much, as it is an existing live application with no NUnit test cases at all. Can I get some suggestions here?

    Read the article

  • How to implement a nightly process in .NET?

    - by Abe Miessler
    I have a set of tasks that I would like to execute every night. These tasks include querying a database, moving and renaming some images, and lastly updating a database table. My first thought had been to create a SQL Server job and use xp_cmdshell to move the files, but after a bit of research I decided against it. My question now is: what is the best way to implement this as a .NET application? Should I create a Windows service? A console application that is scheduled to run once per night? Some other cool way that I don't even know about?

    Read the article

  • What is the difference between panic and an assert?

    - by acidzombie24
    Go doesn't provide assertions. They are undeniably convenient, but our experience has been that programmers use them as a crutch to avoid thinking about proper error handling and reporting. However, it has panic and panicln, which are like print and println but abort execution after printing. Isn't that the same thing as an assert? Why would they claim the above but still have panic? I can see it leading to the same problems, just with an error message added to the end, and it can easily be abused. Am I missing something?

    Read the article

  • How Does the VS 2010 web.config work?

    - by chobo2
    Hi, I am just wondering how this works: in VS 2010 the web.config is broken up into web.config, web.debug.config, and web.release.config. From what I gather, web.config is like the master template. So I am guessing that in my debug config I could put things like my local database, while in my release config I would put my server database. Now, how does it know when to use the release version or the debug version? I have also heard that you can have more than two. How does that work?

    Read the article

  • How do I find out in SQL what database I'm connected to

    - by gjutras
    We have a change control environment where the developers give scripts to the change control people to run. We have dev, QA, and production environments. I want to conditionalize a couple of segments to do different things depending on which database the change control person is running my script against:

        IF @dbname = 'dev' BEGIN
            -- do some dev stuff
        END
        IF @dbname = 'QA' BEGIN
            -- do some qa stuff
        END
        IF @dbname = 'Prod' BEGIN
            -- do some production stuff
        END

    How do I find out which database I am currently connected to and fill @dbname?
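
    A minimal T-SQL sketch of one way to fill @dbname, assuming SQL Server, where DB_NAME() returns the name of the database the current session is connected to:

        DECLARE @dbname sysname;
        SET @dbname = DB_NAME();   -- current database for this connection

        IF @dbname = 'dev'
        BEGIN
            -- do some dev stuff
            PRINT 'Running dev-only section';
        END;

    The same value is also available by calling DB_NAME() inline, so the variable is only a convenience for the IF blocks above.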

    Read the article

  • Using memcached/APC for session storage?

    - by Industrial
    Hi everybody, I had some thoughts a while back about using memcached for session storage, but came to the conclusion that it wouldn't be sufficient if one or more of the servers in the memcached pool went down. A hybrid version, to spare the main database (MySQL) the load caused by reads, would be a function that tries to fetch the data from the cache pool and, if that fails, gets it from the database. After putting some more thought into it, I started thinking about using the APC cache for session-related data. If our web server went down, sessions would be lost either way, so storing them in a local APC or a localhost memcached server maybe isn't that bad? What are your experiences?

    Read the article

  • How to export a JavaScript-generated report to PDF?

    - by Parhs
    Hello. I made a reporting engine in JavaScript for my project. The problem is with printing. Although with page breaks and CSS I can produce a good-looking report, I want to export the report to a PDF so it prints better, without the URL, page title and other things that browsers add. Note that in Chrome there isn't even a page setup dialog! I am using Java on the server side. I am thinking of sending the HTML of the report to the server via Ajax and getting back a URL for the generated PDF, maybe. I am looking for a good tool for this. Thank you.

    Read the article

  • Have an external Java application be notified of changes to Entity EJBs in JBoss AS

    - by John
    I'm trying to connect an external application to a JBoss AS container. The external application is a Java application that is currently notified of changes to database entities through a JMS topic. I've added an EntityLifecycleListener class to all my entities that publishes a serialized (and unwrapped) copy of the entity to the JMS topic. The problem is that this implementation ignores the transaction boundaries of the JBoss container. For example, the @PostUpdate event can fire, generating the JMS message for that entity, but the transaction could then be rolled back, causing the external application to be notified of an invalid change and fall out of sync. I need my external application to be notified only of successful commits to the database, but I also need to be able to publish the entire Java POJO to the external application. Is there an official way of doing this?

    Read the article

  • Is there any tool which can show the call tree for SQL stored procedures

    - by DBZ_A
    I have a huge SQL script which I need to analyse. It would be really helpful if I could find a tool which can generate a call tree, i.e. show which procedures are called from a particular procedure. A Perl-based example is here: http://sqlblog.com/blogs/linchi_shea/archive/2009/10/23/find-the-complete-call-tree-for-a-stored-procedure.aspx, but I need a tool that analyses the text file (the .sql file), not the procedures stored in the database. For various reasons I will not be able to create the whole set of procedures in the database and use the above-mentioned tool. Please respond if you have come across any IDE/tool with this feature.

    Read the article

  • antlr: Best practice to integrate generated parser into the system

    - by green
    Here is the background: I am trying to create a DSL to allow customers to write simple scripts that query our MongoDB-based database. I chose ANTLR to implement the DSL. From my understanding (and please let me know if this is not correct) there are two approaches to integrating the ANTLR-generated parser into the system: (1) embed code into the grammar file so that the generated parser can be used directly to query the database and return results in a certain format (e.g. JSON encoded), or (2) keep the parser purely a parser; after feeding the DSL file to it, construct the query in another class by retrieving the AST from the generated parser class. So, ANTLR users, which one do you think is the way I, as an ANTLR newbie, should go? Can you list the pros and cons of each approach, or do you have another way to recommend?

    Read the article

  • Query using two databases in SQL Report Builder

    - by user912447
    I am new to SQL Server Report Builder 2.0 and I need to compare two different databases in one query. Basically I need to check whether values from one database's table exist in a different database's table. I know I can add multiple data sources to my report and access each one with subreports, but each dataset that I create can only have one query in it. So how can I go about using one query to access two databases? Or, if there is another way to somehow join my results from multiple datasets, that would work too. Also, the databases are on the same server.
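
    Since both databases live on the same SQL Server instance, a single dataset query can reach across them with three-part names. A minimal sketch, using hypothetical database and table names (DatabaseA.dbo.Orders and DatabaseB.dbo.Invoices):

        SELECT a.OrderId, a.CustomerName
        FROM DatabaseA.dbo.Orders AS a
        WHERE NOT EXISTS (
            SELECT 1
            FROM DatabaseB.dbo.Invoices AS b
            WHERE b.OrderId = a.OrderId      -- rows in DatabaseA with no match in DatabaseB
        );

    The report's data source only needs to point at one of the two databases; the query names the other one explicitly, provided the report's credentials can read it.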

    Read the article

  • MySQL Hibernate sort on 2 columns

    - by sammichy
    I have a table as follows:

        Table item {
            ID             - primary key
            content        - string
            published_date - when the content was published
            create_date    - when this database entry was created
        }

    Every hour (or at a specified interval) I run a process to update this table with data from different sources (websites). I want to display the results according to the following rules:

    1. The entries created each time the process runs should be grouped together, so the entries from the second run always come after the entries from the first run, even if the published_date of an entry from the first run is later than the published_date of an entry from the second run.
    2. Within the grouping by run, the entries are sorted by published_date.
    3. A further restriction is that I prefer that data from the same source not be grouped together. If I sort by create_date, published_date I end up with all the data from source A, then all the data from source B, etc. I would rather the data within each hour were mixed up for better presentation.

    If I add a column to this table and store a counter which increments each time the process is run, it is possible to create a query that sorts first by the counter and then by published_date. Is there a way to do it without adding a field? I'm using Hibernate over MySQL. For example:

        Hour 1 (run 1): 4 rows collected from site A (rows 1-4), 3 rows from site B (rows 5-7)
        Hour 2 (run 2): 2 rows collected from site A (rows 8-9), 3 rows from site B (rows 10-12)

    After each run, new records are added to the database from each website. The create date is the time when the record was created in the database; the published date is part of the content and is read in from the external source. When the results are displayed I would like rows to be grouped together by the hour they were collected in, so rows 1-7 would be displayed before rows 8-12. Within each hourly grouping, I would like to sort the results by published date (timestamp). This is necessary so that the posts from all the sites collected in that hour are not grouped together by site, but rather mixed in with each other.
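
    A minimal MySQL sketch of the ordering without adding a counter column, assuming each run can be identified by the hour portion of create_date (table and column names as above):

        SELECT ID, content, published_date, create_date
        FROM item
        ORDER BY
            DATE_FORMAT(create_date, '%Y-%m-%d %H') ASC,  -- bucket rows by the hourly run that inserted them
            published_date ASC;                            -- then interleave the sources by publish time

    If two runs could ever fall inside the same clock hour, the bucket expression would need to be finer-grained, which is where a real run-counter column becomes the safer choice.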

    Read the article

  • How to delete unused sequences?

    - by user1023877
    We are using PostgreSQL. My requirement is to delete unused sequences from my database. For example, if I create a table through my application, one sequence is created, but when deleting the table we do not delete its sequence, too. If I then create the same table again, another sequence is created. Example: table file; automatically created sequence for the id column: file_id_seq. When I delete the table file and create it with the same name again, a new sequence is created (i.e. file_id_seq1). I have accumulated a huge number of unused sequences in my application database this way. How do I delete these unused sequences?
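
    A sketch of a catalog query that could list candidate orphans, assuming the leftover sequences are the ones no longer marked as OWNED BY any table column; review the list before turning it into DROP SEQUENCE statements:

        SELECT n.nspname AS schema_name, c.relname AS sequence_name
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'S'                      -- 'S' = sequence
          AND NOT EXISTS (
              SELECT 1
              FROM pg_depend d
              WHERE d.objid = c.oid
                AND d.deptype = 'a'                -- 'a' = automatic dependency on an owning column
          );

    Each name returned could then be dropped with DROP SEQUENCE schema_name.sequence_name once it is confirmed to be unused.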

    Read the article

  • sql: DELETE + INSERT vs UPDATE + INSERT

    - by user93422
    A similar question has been asked before, but since the answer always depends, I'm asking about my specific situation separately. I have a web-site page that shows some data that comes from a database, and to generate that data I have to run some fairly complex queries with multiple joins. The data is updated once a day (nightly). I would like to pre-generate the data for the said view to speed up page access. For that I am creating a table that contains exactly the data I need. Question: for my situation, is it reasonable to do a complete table wipe followed by an insert, or should I do update + insert? SQL-wise it seems like DELETE + INSERT will be easier (it is a single SQL expression). EDIT: RDBMS: MS SQL Server 2008 Enterprise.
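
    A minimal T-SQL sketch of the nightly wipe-and-reload on SQL Server 2008, using a hypothetical pre-generated table dbo.PageData; doing both statements in one transaction keeps readers from ever seeing a half-empty table:

        BEGIN TRANSACTION;

        DELETE FROM dbo.PageData;    -- drop yesterday's snapshot (TRUNCATE TABLE is faster if nothing references the table)

        INSERT INTO dbo.PageData (ItemId, Title, Total)
        SELECT s.ItemId, s.Title, SUM(s.Amount)
        FROM dbo.SourceRows AS s     -- stand-in for the fairly complex multi-join query
        GROUP BY s.ItemId, s.Title;

        COMMIT TRANSACTION;

    With UPDATE + INSERT the reload touches only changed rows, but it needs key-matching logic (or MERGE on SQL Server 2008), which is more code for a table that is rebuilt wholesale once a day anyway.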

    Read the article

  • How to use GPS data like the double value returned by getLatitude()?

    - by Dan
    I have been searching quite a bit for an answer, but maybe I'm just not using the correct terminology. I am creating an app that will query a database to return a list of other users who are within a certain distance of the user's location. I've never worked with this type of data, and I don't really know what the values mean. I'd like to do all the calculations on the back end with either MySQL or PHP. Currently I am storing the latitude and longitude as doubles in the database. I can store and retrieve them, but I have no idea how I might sort users based on distance. Perhaps I should be using a different type, or some technique that is common in this area. TIA.
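
    A sketch of the usual MySQL approach, assuming a hypothetical users table with latitude/longitude columns stored in degrees; the haversine (great-circle) formula turns the two coordinate pairs into a distance in kilometres, which can then be filtered and sorted (:my_lat and :my_lng are placeholders for the searching user's position):

        SELECT *
        FROM (
            SELECT u.id, u.username,
                   (6371 * ACOS(
                        COS(RADIANS(:my_lat)) * COS(RADIANS(u.latitude)) *
                        COS(RADIANS(u.longitude) - RADIANS(:my_lng)) +
                        SIN(RADIANS(:my_lat)) * SIN(RADIANS(u.latitude))
                   )) AS distance_km               -- 6371 km = mean Earth radius
            FROM users AS u
        ) AS nearby
        WHERE nearby.distance_km < 25              -- only users within ~25 km
        ORDER BY nearby.distance_km ASC;

    The doubles returned by getLatitude()/getLongitude() are exactly those degree values, so they can be bound straight into the placeholders.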

    Read the article

  • How to persist every new entity?

    - by simpatico
    I expect every instantiated entity to correspond to a tuple (and co.) in the database. In the examples I see around, one always instantiates the entity (via a constructor) and then calls persist with that entity. I find this error-prone, and was wondering whether it isn't possible to have every instantiated entity automatically managed/persisted/reflected to the database (at least intended to be). This also seems to prevent me from persisting entities held in instance variables, i.e. I have an entity which instantiates another entity (one it has an association with) in its constructor.

    Read the article

  • PHP+MYSQL Server Config

    - by Matias
    Hi guys, I am parsing an XML file with PHP and inserting the rows into a MySQL database. I am using PHP's simplexml_load_file to load the XML and a foreach to loop through the array and insert the rows into my database. It works perfectly fine with the small files I am testing with, but in reality I need to parse a large 500 MB XML file, and then nothing happens. I was wondering what the right php.ini configuration for this case would be. I have a Linux CentOS VPS with 256 MB of dedicated memory and MySQL 5.0.5. I have also set the PHP memory_limit to 256M (the maximum on my server). Any suggestions or similar experiences will be greatly appreciated. Thanks

    Read the article

  • MySQL COUNT() multiple columns

    - by liam
    Hello, I'm trying to fetch the most popular tags from all the videos in my database (ignoring blank tags). I also need the flv for each tag. I have this working as I want if each video has one tag:

        SELECT tag_1, flv, COUNT(tag_1) AS tagcount
        FROM videos
        WHERE NOT tag_1 = ''
        GROUP BY tag_1
        ORDER BY tagcount DESC
        LIMIT 0, 10

    However, in my database each video is allowed three tags: tag_1, tag_2 and tag_3. Is there a way to get the most popular tags reading from multiple columns? The record structure is:

        +-------+--------------+------+-----+---------+----------------+
        | Field | Type         | Null | Key | Default | Extra          |
        +-------+--------------+------+-----+---------+----------------+
        | id    | int(11)      | NO   | PRI | NULL    | auto_increment |
        | flv   | varchar(150) | YES  |     | NULL    |                |
        | tag_1 | varchar(75)  | YES  |     | NULL    |                |
        | tag_2 | varchar(75)  | YES  |     | NULL    |                |
        | tag_3 | varchar(75)  | YES  |     | NULL    |                |
        +-------+--------------+------+-----+---------+----------------+
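
    A sketch of one common MySQL approach: unpivot the three tag columns with UNION ALL inside a derived table, then count across all of them (column and table names taken from the structure above; MIN(flv) simply picks one representative flv per tag):

        SELECT t.tag, MIN(t.flv) AS sample_flv, COUNT(*) AS tagcount
        FROM (
            SELECT tag_1 AS tag, flv FROM videos
            UNION ALL
            SELECT tag_2 AS tag, flv FROM videos
            UNION ALL
            SELECT tag_3 AS tag, flv FROM videos
        ) AS t
        WHERE t.tag IS NOT NULL AND t.tag <> ''   -- ignore blank or missing tags
        GROUP BY t.tag
        ORDER BY tagcount DESC
        LIMIT 0, 10;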

    Read the article

  • C# Spawn Multiple Threads for work then wait until all finished

    - by pharoc
    I just want some advice on best practice regarding multi-threading tasks. As an example, we have a C# application that upon startup reads data from various "type" tables in our database and stores the information in a collection which we pass around the application. This prevents us from hitting the database each time the information is required. At the moment the application reads data from 10 tables synchronously. I would really like to have the application read from each table on a different thread, all running in parallel, and wait for all the threads to complete before continuing with the startup of the application. I have looked into BackgroundWorker, but just want some advice on accomplishing the above: does the approach sound logical in order to speed up the startup time of our application, and how can we best handle all the threads, keeping in mind that each thread's work is independent of the others and we just need to wait for all of them to complete before continuing? I look forward to some answers.

    Read the article

  • SQL - Derived Foreign Key - Possible?

    - by Chad
    I'm just curious whether this is possible, specifically in SQL CE (Express), with support in .NET's Entity Framework:

        Table1 (primary)
            - nvarchar(2000) url
            - ...
        Table2 (with foreign key)
            - nvarchar(2000) domain
            - ...

    with a foreign key on Table2.domain that references Table1.url such that Table1.url contains Table2.domain, e.g.

        Table1:
            http://www.google.com/blah/blah
            http://www.cnn.com/blah/
            http://www.google.com/foo
        Table2:
            google.com
            cnn.com

    Is it possible for this to be scripted and enforced by SQL CE (let alone any relational database) and, if so, can .NET's Entity Framework automatically support this if I import my database into a model?
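
    A declarative foreign key can only compare columns for equality, so a "contains" relationship like this cannot be enforced as a real FK, and Entity Framework will not model it as an association. A sketch of a plain SQL validation query, using the table and column names above, that finds Table2 rows whose domain is not contained in any Table1 url:

        SELECT t2.domain
        FROM Table2 AS t2
        WHERE NOT EXISTS (
            SELECT 1
            FROM Table1 AS t1
            WHERE t1.url LIKE '%' + t2.domain + '%'   -- url "contains" the domain
        );

    A check like this could be run from application code or from a trigger (where triggers are available) rather than expressed as a constraint.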

    Read the article

  • Accessing TextView from another class

    - by Jenny
    I've got my main startup class loading main.xml, but I'm trying to figure out how to access the TextView from another class, which is loading information from a database. I would like to publish that information to the TextView. So far I've not found any helpful examples on Google. EDIT: This is the class that is doing the database work:

        import android.widget.TextView;
        import android.view.View;

        public class DBWork {
            private View view;
            ...
            TextView tv = (TextView) view.findViewById(R.id.TextView01);
            tv.setText("TEXT ME");
        }

    Yet every time I do that I get a NullPointerException.

    Read the article

  • Should we be giving the client's management team direct access to our GitHub repository so that the

    - by SharePoint Newbie
    Hi, we are presently working for a client who is new to working with distributed teams. We have teams spread across India and the UK. Although we have decent project tracking tools (Mingle), would it be a good idea to give the PM at the client access to our GitHub repo? Would this make it easier for them to see what the devs are working on and give them an insight into what the team has been developing? I agree that not all commit messages would make sense to them, but would this be a good way to boost their confidence in what we are doing? They can already check out our fortnightly releases on our QA and UA environments, but those still lag dev by 5-6 days. Also, is there any reporting for GitHub which makes it easier for PM types to make sense of it all? Thanks

    Read the article

  • What is the reliable way to return error code from an MPI program?

    - by mezhaka
    The MPI standard (page 295) says: "Advice to users. Whether the errorcode is returned from the executable or from the MPI process startup mechanism (e.g., mpiexec), is an aspect of quality of the MPI library but not mandatory." Indeed, I had no success running the following code:

        if (0 == my_rank) {
            FILE* parameters = fopen("parameters.txt", "r");
            if (NULL == parameters) {
                fprintf(stderr, "Could not open parameters.txt file.\n");
                printf("Could not open parameters.txt file.\n");
                exit(EXIT_FAILURE); // Tried MPI_Abort() as well
            }
            fscanf(parameters, "%i %f %f %f", N, X_DIMENSION_Dp, Y_DIMENSION_Dp, HEIGHT_DIMENSION_Dp);
            fclose(generation_conf);
        }

    I am not able to get the error code back into the shell in order to decide on further actions. Neither of the two error messages is printed. I think I might write the error codes and messages to a dedicated file instead. Has anyone ever had a similar problem, and what options did you consider for doing reliable error reporting?

    Read the article
