Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Supporting anonymous users vs. the Google bot

    - by Andy
    I have a User class in my web app that represents the user who is currently logged in. Every time a user visits a page, a User instance is populated based on authentication data supplied in cookies. A User instance is created even for an anonymous visitor, and a corresponding new record is created in the User table in the database. This approach allows me to save some state for the current user regardless of their type. The problem with this approach is the Google bot, and other non-human web organisms crawling my pages. Every time a bot starts to walk around the site, thousands of useless records are created in the database, each of them used for only a single page. Question: what is the best trade-off? How do I support anonymous users and save their state without incurring too much overhead from cookieless bots?
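
    One mitigation sketch (Python; the asker's stack is unspecified and all names here are illustrative): skip persistence for clients whose User-Agent matches a known crawler, and only create the database record once a cookie set on the first response comes back, which cookieless bots never do.

        import re

        # Hypothetical helper: decide whether this request deserves a
        # database-backed User row at all.
        BOT_PATTERN = re.compile(r"googlebot|bingbot|slurp|crawler|spider", re.I)

        def should_persist_user(user_agent, has_session_cookie):
            """Persist a User only for clients that look human."""
            if user_agent and BOT_PATTERN.search(user_agent):
                return False  # known crawler: use a transient in-memory user
            # The first response sets a cookie; create the record only once
            # the client proves it returns cookies on a later request.
            return has_session_cookie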

    Read the article

  • Jobs magically disappear from the queue (delayed_job, Mongoid 2, on Heroku)

    - by Hayk Saakian
    Let's say I do something like arrs = Article.where(:body => nil), and arrs.count is, say, 900. Then I do arrs.each do |ar| ar.delay.download_via_diffbot end, where download_via_diffbot is a method that takes some time, does some HTTP, and writes a non-nil value to ar.body. Now I watch the logs, wait a few minutes while ~5 dynos work the jobs, and count again: arrs.count is now ~800. So what happened? I thought I just told my workers to do ~900 jobs; what happened to the other 800? I can confirm that I'm only making ~100 HTTP requests, because the API reporting shows me this, and simply watching the logs makes it clear that 900 jobs are not happening.
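
    One thing worth ruling out (a minimal Ruby sketch; Delayed::Job is delayed_job's standard job model): a Mongoid criteria is lazy, so arrs.count re-runs the query on every call and shrinks as workers fill in :body. Snapshotting the IDs first gives a stable count of what was enqueued, and counting the queue itself shows how many jobs actually exist.

        # Freeze the set of articles before enqueueing, so later counts are
        # not taken against a moving query.
        ids = Article.where(:body => nil).map(&:id)    # ~900 frozen ids
        ids.each { |id| Article.find(id).delay.download_via_diffbot }

        puts ids.size            # jobs you intended to enqueue
        puts Delayed::Job.count  # jobs actually sitting in the queue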

    Read the article

  • Gaining application/module context from a symfony task

    - by Martin Chatterton
    I have written a reporting suite, and I have a specific report that builds a CSV file. Serving this file via a browser on demand isn't an issue, but I need to be able to build this CSV file nightly and email around a link to download it. Essentially, I need to replace a specific action with a symfony task, run via cron. So how do I gain application/module context from a symfony task? And secondly, how would I invoke the SwiftMailer library from a symfony task? I'm using symfony v1.4.4 and PHP v5.2.13. Thanks in advance for your help.
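
    A hedged sketch for symfony 1.4 (class name, namespace, and addresses are illustrative): the --application and --env options give the task a full application configuration, sfContext::createInstance() satisfies code that expects a context, and sfBaseTask exposes the configured SwiftMailer instance via getMailer() in symfony 1.3/1.4.

        class buildCsvReportTask extends sfBaseTask
        {
          protected function configure()
          {
            $this->namespace = 'report';
            $this->name      = 'build-csv';
            $this->addOptions(array(
              new sfCommandOption('application', null, sfCommandOption::PARAMETER_REQUIRED, 'The application name', 'frontend'),
              new sfCommandOption('env', null, sfCommandOption::PARAMETER_REQUIRED, 'The environment', 'prod'),
            ));
          }

          protected function execute($arguments = array(), $options = array())
          {
            // Give module/helper code the sfContext it expects.
            sfContext::createInstance($this->configuration);

            // ... build the CSV file and save it under web/ here ...

            $message = Swift_Message::newInstance('Nightly CSV report')
              ->setFrom('reports@example.com')
              ->setTo('team@example.com')
              ->setBody('Download: http://www.example.com/reports/latest.csv');
            $this->getMailer()->send($message);
          }
        }

    Cron then runs it as ./symfony report:build-csv --env=prod.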

    Read the article

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism, discussed in the Rails documentation, for calling a function at the end of a request regardless of filter chain aborts? It's not after filters, because after filters don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end in a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on web services), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and Apache logs. At a higher level, is there a different "Rails way" that solves my app-profiling goal?
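
    One reliable home for this logic (a minimal Ruby sketch; class and key names are illustrative): Rack middleware wraps the whole request, so an ensure block runs even when a before_filter redirects, renders, or raises.

        class RequestProfileLogger
          def initialize(app)
            @app = app
          end

          def call(env)
            Thread.current[:profile] = {}   # controllers append to this hash
            @app.call(env)
          ensure
            Rails.logger.info(Thread.current[:profile].to_json)
            Thread.current[:profile] = nil  # avoid leaking state between requests
          end
        end

    Register it with config.middleware.use RequestProfileLogger and have the custom controller accessors write into Thread.current[:profile] instead of bare instance variables.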

    Read the article

  • What is the fastest way to get the persisted object after calling Hibernate's saveOrUpdate?

    - by Dave
    I'm using Hibernate 3.2.1.ga, Hibernate Annotations 3.2.1.ga, and hibernate-jpa-2.0-api. I can't upgrade at this time, as I'm working with legacy code. I have this generic method for saving or updating objects:

        protected void saveOrUpdate(Object obj) {
            final Session session = sessionFactory.getCurrentSession();
            session.saveOrUpdate(obj);
        }

    You can assume that every argument obj will have a member field that is marked with the @Id annotation. I would like to change the return type so that the method returns an object representing the persisted object in the database (meaning that if obj didn't contain an id before, what is returned is the database object with a populated id). What is the fastest way to do this given my versioning and generic constraints?
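
    A minimal sketch, assuming generated identifiers: Session.saveOrUpdate() populates the identifier on the very instance you pass in, so the cheapest correct version simply returns its argument, with generics preserving the caller's type.

        // Hibernate mutates obj in place: after saveOrUpdate(), a new object's
        // @Id field holds the assigned identifier; no reload is required.
        protected <T> T saveOrUpdate(T obj) {
            final Session session = sessionFactory.getCurrentSession();
            session.saveOrUpdate(obj);
            return obj;
        }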

    Read the article

  • In MySQL, is it better to have one big table or many smaller tables?

    - by user307922
    Hi all, I am making a database of my clients' customers to send email promotions to. The database will include about 12 of my clients, and each of them has an average of 2,100 customers. I was wondering whether it would be better to have a table in the db for each one of my clients containing a list of their customers, or whether I should just make one big table. The customers will be queried daily. I know it is a broad question, but any advice would be appreciated. Cheers, Chuck
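
    A minimal sketch of the single-table shape (all names are illustrative): roughly 25,000 rows is tiny for MySQL, and an index on client_id keeps per-client queries cheap while avoiding 12 copies of the same schema.

        CREATE TABLE customers (
          id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          client_id INT UNSIGNED NOT NULL,
          email     VARCHAR(255) NOT NULL,
          name      VARCHAR(100),
          INDEX idx_client (client_id)
        ) ENGINE=InnoDB;

        -- The daily query hits the index instead of a per-client table:
        SELECT email FROM customers WHERE client_id = 7;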

    Read the article

  • ODBC Storage Size

    - by dcp3450
    I'm pulling a lot of text (which includes some HTML) from a MS SQL Server database, and I'm not getting all of it. The text is stored perfectly in the database; however, when I run the query to get the data, it only pulls part of the text. I pull the data using odbc_exec and store it with $variable = odbc_result($runquery, "body"). If I display the content with odbc_result_all($runquery), I get part of the content. If I use echo $body;, I get part of the content, then some garbage, and then part of the text from the beginning - a very strange response. Is there a size limit? Any ideas what I'm missing here?
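
    A hedged PHP sketch (query and column names are illustrative): PHP's ODBC layer truncates long text columns at odbc.defaultlrl, which defaults to 4096 bytes. Raising the limit on the result resource before fetching usually recovers the full value; the same setting can also be changed globally in php.ini.

        $runquery = odbc_exec($conn, "SELECT body FROM articles WHERE id = 1");
        odbc_binmode($runquery, ODBC_BINMODE_CONVERT);
        odbc_longreadlen($runquery, 1024 * 1024);  // allow up to 1 MB per field
        odbc_fetch_row($runquery);
        $body = odbc_result($runquery, "body");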

    Read the article

  • TCP 3-way handshake

    - by Tom
    Hi, I'm just observing what Nmap is doing for the three ports it reports as open. I understand what a half-open scan is, but what's happening doesn't make sense. Nmap reports that ports 139 and 445 are open - all fine. But when I look at the control bits, Nmap never sends RST once it has found out that the port is open. It does this for port 135, but not for 139 and 445. This is what happens (I have omitted the victim's replies): sends a 2 (SYN); sends a 16 (ACK); sends a 24 (ACK + PSH); sends a 16 (ACK); sends a 17 (ACK + FIN). I don't get why Nmap doesn't RST ports 139 and 445.

    Read the article

  • SQL Server XML-type column duplicate entry detection

    - by aaaa bbbb
    In SQL Server I am using an XML-type column to store a message. I do not want to store duplicate messages, and I will only have a few messages per user. I am currently querying the table for these messages and converting the XML to a string in my C# code; I then compare the strings with what I am about to insert. Unfortunately, SQL Server normalizes the data stored in XML-typed fields: what you store in the database is not necessarily exactly the same string as what you get back out later. It is functionally equivalent, but may have whitespace removed, etc. Is there an efficient way to compare an XML string that I am considering inserting with those that are already in the database? As an aside, if I detect a duplicate I need to delete the older message and then insert the replacement.
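
    A hedged T-SQL sketch (table and column names are illustrative): xml columns cannot be compared with = directly, but parsing the candidate string into the xml type applies the same normalization as storage, so casting both sides back to nvarchar(max) makes functionally equivalent documents compare equal regardless of the original whitespace.

        DECLARE @incoming xml;
        SET @incoming = @messageText;  -- parsing normalizes like the stored column

        SELECT m.Id
        FROM   Messages AS m
        WHERE  m.UserId = @userId
          AND  CONVERT(nvarchar(max), m.Body) = CONVERT(nvarchar(max), @incoming);

    If a row comes back, delete it and insert the replacement in the same transaction.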

    Read the article

  • PHP code not executing - dies when referring to a member of a static class - no error displayed

    - by Ali
    I'm having some problems with this piece of code. I've included a class declaration and am trying to create an object of that class, but my code dies out. It doesn't seem to be an include issue, as all the files are being included, even the files called for inclusion within the class file itself. However, the object is not created. I tried to put an echo statement in the __construct function, but nothing: it just doesn't run. In fact the object isn't created and the code won't continue from there, yet no error is reported or displayed, and I have error_reporting set to E_ALL and display_errors set to true. What is happening here? :(

    EDIT: Sorry, I checked again; the error occurs prior to the object creation. The code dies out when it tries to refer to a constant in a static class, like so: $v = Zend_Oauth::REQUEST_SCHEME_HEADER; This is the class, or part of it (it has largely static functions; it's the Zend_Oauth class):

        class Zend_Oauth {
            const REQUEST_SCHEME_HEADER      = 'header';
            const REQUEST_SCHEME_POSTBODY    = 'postbody';
            const REQUEST_SCHEME_QUERYSTRING = 'querystring';
            // continued
        }

    Like I said, no error is being reported at all. :(
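
    A hedged debugging sketch: a fatal error such as "Class 'Zend_Oauth' not found" can be hidden if display_errors is overridden at runtime or the error only lands in the error log. Forcing the settings at the very top of the entry script and probing the autoloader narrows it down:

        error_reporting(E_ALL | E_STRICT);
        ini_set('display_errors', '1');

        // false here points at an include_path or autoloader problem rather
        // than anything in the constant itself.
        var_dump(class_exists('Zend_Oauth'));
        $v = Zend_Oauth::REQUEST_SCHEME_HEADER;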

    Read the article

  • How to nicely inform the user that an unknown error has happened?

    - by Jaime Soriano
    There are several guidelines for error reporting, usually based on giving the user useful information when he or she does something wrong; but to give that kind of information, you need to be handling the error and know that it can happen. There are also tons of articles about designing 404 error pages. But what can you do when it's a new, unhandled error provoked by a failure in the software? Are there guidelines about how to nicely report totally unexpected errors on a web site, such as an unexpected error 500? What header message should be shown in that case? Would something like "Sorry, an unexpected error has occurred" be enough? What information should be given? Should there be mechanisms to help report the failure to developers? Which ones?

    Read the article

  • How to manipulate shell output in PHP

    - by Mirage
    I am trying to write a PHP script that does some shell reporting functions, starting with a disk-usage report. I want output in the following format, and nothing else:

        drive path ------------total-size --------free-space

    My script is:

        $output = shell_exec('df -h -T');
        echo "<pre>$output</pre>";

    and its output looks like this:

        Filesystem    Type      Size  Used Avail Use% Mounted on
        /dev/sda6     ext3       92G  6.6G   81G   8% /
        none          devtmpfs  3.9G  216K  3.9G   1% /dev
        none          tmpfs     4.0G  176K  4.0G   1% /dev/shm
        none          tmpfs     4.0G  1.1M  4.0G   1% /var/run
        none          tmpfs     4.0G     0  4.0G   0% /var/lock
        none          tmpfs     4.0G     0  4.0G   0% /lib/init/rw
        /dev/sdb1     ext3      459G  232G  204G  54% /media/Server
        /dev/sdb2     fuseblk   466G  254G  212G  55% /media/BACKUPS
        /dev/sda5     fuseblk   738G  243G  495G  33% /media/virtual_machines

    How can I convert that output into my formatted output?
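
    A minimal PHP sketch: split each data row on runs of whitespace and print only the columns you want (df -T yields Filesystem, Type, Size, Used, Avail, Use%, Mounted on, in that order).

        $lines = explode("\n", trim(shell_exec('df -h -T')));
        array_shift($lines);  // drop df's header row
        echo "<pre>drive path\ttotal-size\tfree-space\n";
        foreach ($lines as $line) {
            $cols = preg_split('/\s+/', $line);
            // 0=Filesystem 2=Size 4=Avail 6=Mounted on
            echo "{$cols[0]} {$cols[6]}\t{$cols[2]}\t{$cols[4]}\n";
        }
        echo "</pre>";

    Mount points containing spaces would break the naive split; none of the paths above do.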

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query, and it will be called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not, but if normalization becomes a problem in the future, I may have to rewrite 40% of the software, and it may take me a long time. Does normalization really hurt in this case? Should I denormalize now while I have the time?

    Read the article

  • How to configure Remote BLOB Storage (RBS) with Microsoft Dynamics CRM 4.0?

    - by jk
    Hi, we have a working site for Dynamics CRM 4.0, and in it we are storing images in the database. The database is now growing very fast and the server is dying, so I want to enable Remote BLOB Storage (RBS) with Dynamics CRM 4.0. I tried to install RBS for testing, but every guide configures it with SharePoint 2010, not with Dynamics CRM. Does anybody know how to install and configure it with Dynamics CRM 4.0? Does RBS work with the Standard Edition of SQL Server 2008? I followed this guide to install it, but it is for SharePoint: http://technet.microsoft.com/en-us/library/ee663474.aspx Any help is appreciated. Thanks

    Read the article

  • FileMaker XSL 20-Second Query Latency

    - by Ian Wetherbee
    I have an ASP front end that loads data from a FileMaker database, using XSL to perform simple queries. The problem is that the first page load takes 20 seconds +/- 200 ms; the next few page refreshes within a minute of the first request take under 200 ms, and then the cycle starts over. Each page load makes only two XSL queries, and they execute fast after the first page load, so what is causing the delay on the first one? I have caching turned up with a 100% hit rate and the number of connections at 100. I've tried with XSL database sessions on and off, and with session times anywhere from 1 to 60 minutes, without any change. The XSL loads from ASP use a GET request and add a Basic Authorization header to authenticate each time. During fast page requests, the fmserver.exe and fmswpc.exe processes don't even flinch, but during a 20-second holdup I see fmserver jump to 30% CPU with a 3 MB I/O read a few seconds into the request, and occasionally fmswpc jumps to 60% CPU.

    Read the article

  • Entity Framework: "This property descriptor does not support the SetValue method"

    - by Gayan
    Hello guys, below are my entities, which I have created using Entity Framework:

        Retailer: Id, Name, Childs (navigation)
        generated database schema:
            [Id] [int] IDENTITY(1,1) NOT NULL,
            [Name] nvarchar NOT NULL

        Children: Id, Name, Retailer (navigation)
        generated database schema:
            [Id] [int] IDENTITY(1,1) NOT NULL,
            [name] nvarchar NOT NULL,
            [Retailer_Id] [int] NOT NULL

    As you can see in the above model, the relationship is that one retailer can have 0 or 1 child. My problem is that when I create a new child and set its retailer navigation property to a retailer entity, it throws the following exception; how do I solve it? Error while setting property 'retailer': 'This property descriptor does not support the SetValue method.'

    Read the article

  • SQL Server 2000 - Filter by String Length

    - by user208662
    Hello, I have a database on a SQL Server 2000 server. This database has a table called Person that has a field called FullName, which is a VARCHAR(100). I am trying to write a query that will give me all records that have a name. Records that do not have a name have a FullName value of either NULL or an empty string. How do I get all of the Person records that have a FullName? In other words, I want to ignore the records that do not have one. Currently I am trying the following:

        SELECT *
        FROM Person p
        WHERE p.FullName IS NOT NULL AND LEN(p.FullName) > 0

    Thank you
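
    A minimal note on that query: it already works, and it can be trimmed. LEN() ignores trailing spaces, so a value of all blanks also yields 0, and NULL fails any comparison on its own, which makes the explicit IS NOT NULL test redundant (though harmless as documentation).

        SELECT *
        FROM   Person p
        WHERE  LEN(p.FullName) > 0;  -- excludes NULL, '' and all-blank names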

    Read the article

  • jQuery attribute indexOf

    - by Victor
    When I get at the onclick attribute of a custom (Reporting Services) checkbox, I get the correct result. However, when I try to use indexOf on that result, it says "Object doesn't support this property or method". That is, this is fine and gives me a long string:

        $('input[id*=CustomCheckBox]').click(function() {
            alert( $(this).attr("onclick") );
        });

    But this gives the error (object doesn't support this property or method):

        $('input[id*=CustomCheckBox]').click(function() {
            if ($(this).attr("onclick").indexOf("SomeString") > -1) {
                //do some processing here
            }
        });

    What would I need to modify so that indexOf works properly?
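
    A hedged JavaScript sketch: in some browsers (older IE in particular), attr("onclick") hands back the handler as a Function object rather than a string, and a Function has no indexOf. Coercing the value to a string sidesteps that:

        $('input[id*=CustomCheckBox]').click(function() {
            // '' + forces a string even where a Function object is returned
            // (IE returns a function from getAttribute for event handlers too).
            var handler = '' + this.getAttribute('onclick');
            if (handler.indexOf('SomeString') > -1) {
                // do some processing here
            }
        });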

    Read the article

  • How to write custom SQLite functions in JavaScript inside a WebKit browser?

    - by Jay Godse
    I have just learned how to use the SQLite database for local storage in a WebKit web browser (e.g. Google Chrome or Apple Safari) using the JavaScript API, for example in the "Sticky Notes" application. However, I know that SQLite has a function called sqlite3_create_function() that lets you add custom functions to your instance of SQLite on the fly, which can then be used inside SQL queries. This function is described at sqlite.org, and you can call an equivalent of this API in Ruby, as described here. QUESTION: Can anybody show me how to do this in JavaScript, i.e. write a custom function in JavaScript that can be bound into the SQLite database at run time to be called by the SQLite engine, all inside a WebKit browser?

    Read the article

  • DB2 JDBC driver does not release table locks

    - by as
    Situation: we have a web service running on Tomcat, accessing a DB2 database on an AS/400. We are using the JTOpen drivers for JNDI connections handled by Tomcat, and Spring for handling transactions and database access. For each select, the system takes a JDBC connection from JNDI (i.e. from the connection pool), does the selection, and at the end closes the ResultSet and Statement and releases the Connection, in that order. That works fine: the shared lock on the table disappears. But when we do an update the same way as the select (except that we have no ResultSet object in that situation), the lock on the table stays after the Connection is released back to JNDI. If we set maxIdle=0 for the number of connections in the JNDI configuration, the problem disappears, but this degrades performance: we have about 100 users online on that service, and we need a few connections to stay alive in the pool. What do you suggest?
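
    One thing worth checking (a hedged JDBC sketch; table and column names are illustrative): DB2 holds locks until the transaction ends, not until the connection is closed, and returning a connection to the pool does not commit an open transaction. Committing or rolling back explicitly releases the locks regardless of how long the pooled connection stays idle.

        Connection con = dataSource.getConnection();
        try {
            con.setAutoCommit(false);
            Statement st = con.createStatement();
            st.executeUpdate("UPDATE some_table SET some_col = 1 WHERE id = 42");
            st.close();
            con.commit();        // ends the transaction, releasing the table lock
        } catch (SQLException e) {
            con.rollback();
            throw e;
        } finally {
            con.close();         // only returns the connection to the pool
        }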

    Read the article

  • Entering Complex Data into Access

    - by DataMakesMeCrazy
    Fairly new to Access and trying to do something that seems simple but may be very complex. I want to create a database of projects; each project has several phases (i.e. proposal, marketing, etc.) and allows multiple employees to work on a single project, e.g. Bob and John are working on project number 102. From here, I would like to enter the forecasted start and end dates for each phase of the project, and enter the forecasted number of hours each employee will be allowed to work on that phase of that project, e.g. Project - Employee - Phase - Start - End - (list weeks): 102 - Bob - Marketing - 12-May-10 - 21-May-10 - 3 - 5 (3 hours the first week, 5 hours the second), and so on. Basically, would all this data be in one table, or several? And can Access dynamically show the weeks between the start and end dates so that I can input the hours? I feel this database will become severely complicated :S Thanks, J
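
    A hedged sketch of one normalized shape (Access/Jet DDL; all names are illustrative): splitting projects, phases, assignments, and weekly hours into separate tables means the weeks are just rows, so nothing about the schema changes when a phase spans more or fewer weeks.

        CREATE TABLE Projects    (ProjectID COUNTER PRIMARY KEY,
                                  ProjectNumber TEXT(20));
        CREATE TABLE Employees   (EmployeeID COUNTER PRIMARY KEY,
                                  EmployeeName TEXT(50));
        CREATE TABLE Phases      (PhaseID COUNTER PRIMARY KEY,
                                  ProjectID LONG, PhaseName TEXT(30),
                                  StartDate DATETIME, EndDate DATETIME);
        CREATE TABLE Assignments (AssignmentID COUNTER PRIMARY KEY,
                                  PhaseID LONG, EmployeeID LONG);
        CREATE TABLE WeeklyHours (AssignmentID LONG, WeekStarting DATETIME,
                                  Hours DOUBLE,
                                  CONSTRAINT pkWeekly PRIMARY KEY (AssignmentID, WeekStarting));

    A form bound to Assignments with a WeeklyHours subform then lets you key in hours per week; the weeks between StartDate and EndDate can be pre-generated with a small VBA loop.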

    Read the article

  • ASP.NET web services leak memory when (de)serializing disposable objects?

    - by Serilla
    In the following two cases, if Customer is disposable (implementing IDisposable), I believe it will not be disposed by ASP.NET, potentially being the cause of a memory leak:

        [WebMethod]
        public Customer FetchCustomer(int id) {
            return new Customer(id);
        }

        [WebMethod]
        public void SaveCustomer(Customer value) {
            // save it
        }

    This flaw applies to any IDisposable object, so returning a DataSet from an ASP.NET web service, for example, will also result in a memory leak: the DataSet will not be disposed. In my case, Customer opened a database connection that was cleaned up in Dispose, except Dispose was never called, resulting in loads of unclosed database connections. I realise there is a whole bunch of bad practices being followed here (it's only an example anyway), but the point is that ASP.NET, as the (de)serializer, is responsible for disposing these objects, so why doesn't it? This is an issue I was aware of for a while but never got to the bottom of. I'm hoping somebody can confirm what I have found, and perhaps explain whether there is a way of dealing with it.
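
    A hedged workaround sketch (CustomerDto is an invented type): since the serializer offers no disposal hook, avoid handing it the disposable at all. Copy what the caller needs into a plain data object and dispose the resource-holder deterministically inside the method.

        public class CustomerDto          // plain data, nothing to dispose
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        [WebMethod]
        public CustomerDto FetchCustomer(int id)
        {
            using (Customer customer = new Customer(id))   // owns the DB connection
            {
                return new CustomerDto { Id = customer.Id, Name = customer.Name };
            }
        }

    The same idea applies on the way in: accept the DTO as the parameter and construct (and dispose) the real Customer inside SaveCustomer.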

    Read the article

  • I want to retrieve some information based on Caller ID

    - by Hassan Al-Jeshi
    Hello, my friend has a real-estate company that receives a lot of phone calls every day. He wants a solution such that when somebody calls his company, the operator sees all the information about the person who is calling, based on the database he has right now and the caller ID. Is there ready-made software or a solution that can do the job? Since I'm a software engineer myself, I wouldn't mind developing something from scratch or building on a ready-made system (with a team, of course), but I need some direction on how to start. Notes: 1) cost is not an issue; 2) the customer database is there, but we wouldn't mind converting it to a format that suits the new solution. Best regards

    Read the article

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of distributed setup for running a ton of small, simple REST web queries in a production environment. For each 5-10 related queries executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem set? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also more grid-based architectures, such as Condor and Sun Grid Engine, which I have seen mentioned; I'm not sure whether those platforms have any recovery from errors (checking whether a job succeeds). What I would really like is a FIFO-type queue that I could add jobs to, with the end result being my database getting updated. Any suggestions on the best tool for the job?
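
    Since the results land in PostgreSQL anyway, here is a hedged sketch of a minimal database-backed FIFO (all names are illustrative): each worker claims the oldest unclaimed job, runs its REST calls, and writes the derived row and the completion mark in one transaction, so a crashed worker leaves nothing half-done.

        CREATE TABLE jobs (
          id         serial PRIMARY KEY,
          payload    text NOT NULL,           -- e.g. the URLs to query
          claimed_at timestamptz,
          done_at    timestamptz
        );

        -- One claim per worker iteration; FOR UPDATE serializes the claim,
        -- which is acceptable at this job-size scale.
        UPDATE jobs SET claimed_at = now()
        WHERE id = (SELECT id FROM jobs
                    WHERE claimed_at IS NULL
                    ORDER BY id LIMIT 1
                    FOR UPDATE)
        RETURNING id, payload;

    Dedicated brokers (beanstalkd, RabbitMQ, Amazon SQS) do the same with better contention behavior if the job volume grows.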

    Read the article
