Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Cannot add table to context - LINQ-To-SQL

    - by Oskar Kjellin
    Hey, I'd be happy to give you more info if you need it; just ask. In my database I have a table of articles, a table of tags, and a link table. The articles table has columns like Id, Subject, etc., while the tags table has only Id and Tag. The link table has TagId and ArticleId. The problem is that when I drag and drop the link table onto the LINQ-to-SQL designer, nothing happens! This all worked before I renamed the "ID" columns in the tables to "Id" to correct the spelling. Thanks in advance!

    Read the article

  • SQL data source for GridView

    - by Karsten
    Hi, I want to use a GridView with sorting and paging to display data from a SQL Server database. The query uses three joins plus the full-text search function CONTAINSTABLE, and its FROM clause involves all three tables. What is the best way to set this up? I can think of a stored procedure, SQL written directly into the SqlDataSource, or a view created in the database. I want good performance and would like to leverage the GridView's automatic sorting and paging features as much as possible.
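    Of the three options, a stored procedure gives the most control, since CONTAINSTABLE results can be ranked and paged server-side so the GridView only ever receives one small page. A sketch, with all table and column names invented for illustration:

        -- full-text match plus server-side paging (SQL Server 2005+)
        CREATE PROCEDURE dbo.SearchArticles
            @search   nvarchar(200),
            @startRow int,
            @maxRows  int
        AS
        SELECT Id, Title, Category, [RANK]
        FROM (
            SELECT a.Id, a.Title, c.Name AS Category, ft.[RANK],
                   ROW_NUMBER() OVER (ORDER BY ft.[RANK] DESC) AS rn
            FROM dbo.Articles AS a
            JOIN CONTAINSTABLE(dbo.Articles, (Title, Body), @search) AS ft
                 ON ft.[KEY] = a.Id
            JOIN dbo.Categories AS c ON c.Id = a.CategoryId
        ) AS ranked
        WHERE rn BETWEEN @startRow AND @startRow + @maxRows - 1;

    With custom paging like this, the GridView's automatic sorting needs a sort expression passed into the procedure as well; that is the usual price of not letting the control pull down the whole result set.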

    Read the article

  • How to manage test fixtures for end-to-end testing?

    - by Peter Becker
    Having just set up a test framework for a new web application, I realized I missed one of the big questions: how do I make tests independent from each other? Years ago I set up some complicated Ant scripting to run full cycles of deleting all database tables, recreating the schema, adding test data, starting the application, running one test, and then stopping the application. That was a pain to maintain and restricted us to nightly tests because of how long the full suite took to run. It was still worth it, but I wonder if there is an easier way. Are there alternatives to this approach? The main criterion is that each test must not be affected by any other test in the suite, whether that test failed or succeeded.
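    One lighter-weight alternative is to give every test its own database transaction and roll it back afterwards, so fixture data never outlives the test regardless of pass or fail. A minimal JUnit sketch (TestDb.open() is a hypothetical helper standing in for however the suite obtains a connection):

        import java.sql.Connection;

        import org.junit.After;
        import org.junit.Before;

        public abstract class TransactionalTestBase {
            protected Connection connection;

            @Before
            public void beginTransaction() throws Exception {
                connection = TestDb.open();        // hypothetical connection helper
                connection.setAutoCommit(false);   // everything the test writes stays uncommitted
            }

            @After
            public void rollbackTransaction() throws Exception {
                connection.rollback();             // undo the test's changes, pass or fail
                connection.close();
            }
        }

    This only isolates tests that go through the managed connection; end-to-end tests that exercise a separately running application still need the heavier reset-the-schema approach or per-test databases.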

    Read the article

  • How does TransactionScope guarantee data integrity across multiple databases?

    - by Bas Smit
    Hey guys, can someone explain the principle by which TransactionScope guarantees data integrity across multiple databases? My mental model is that it first sends the commands to the databases and then waits for them to respond before sending a message telling each one to apply the command sent earlier. However, if execution stops abruptly while those apply messages are being sent, we could still end up with one database that has applied the command and one that has not. Can anyone shed some light on this?

    Edit: I guess what I'm asking is whether I can rely on TransactionScope to guarantee data integrity when writing to multiple databases, even in the case of a power outage or a sudden shutdown. Thanks, Bas

    Example:

        using (var scope = new TransactionScope())
        {
            using (var context = new FirstEntities())
            {
                context.AddToSomethingSet(new Something());
                context.SaveChanges();
            }
            using (var context = new SecondEntities())
            {
                context.AddToSomethingElseSet(new SomethingElse());
                context.SaveChanges();
            }
            scope.Complete();
        }
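    What the question describes is essentially two-phase commit, which is what TransactionScope hands off to MSDTC once a second durable resource enlists: in the prepare phase each database durably logs its work and votes, and only after every vote is in does the coordinator issue the commits, with recovery resolving any in-doubt transactions after a crash so no participant is left permanently half-applied. A quick diagnostic sketch (not part of the original code) for observing that escalation inside the scope above:

        // after the second context has used its connection:
        var info = System.Transactions.Transaction.Current.TransactionInformation;
        Console.WriteLine(info.Status);                 // Active
        Console.WriteLine(info.DistributedIdentifier);  // non-empty Guid once MSDTC owns the transaction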

    Read the article

  • Using Silverlight 4 with ODBC

    - by user1384831
    I'm completely new to Silverlight. I want to connect to a Netezza database over an ODBC connection and pull records to display nicely in Silverlight. What's the easiest way to do this? From some research, creating a WCF RIA service seems to be what most people do ( http://www.codeproject.com/Articles/354715/Creating-a-WCF-RIA-Services-Class-Library-for-a-Si ), but the process seems a bit convoluted. Coming from an ASP.NET background, could I do something simpler, like creating an ODBC connection in the code-behind (using the System.Data.Odbc classes), executing a query, storing the returned records in a DataTable, and then binding that to a Silverlight control?
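    The code-behind shortcut isn't available here: Silverlight code runs in the browser sandbox with a trimmed class library that has no System.Data.Odbc, so the ODBC work has to live in a service on the server. A plain WCF service (RIA Services is optional) can be enough; a rough sketch of the service half, with every name, the DSN, and the query invented for illustration:

        using System.Collections.Generic;
        using System.Data.Odbc;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class RecordDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [ServiceContract]
        public interface IRecordService
        {
            [OperationContract]
            List<RecordDto> GetRecords();
        }

        public class RecordService : IRecordService
        {
            public List<RecordDto> GetRecords()
            {
                var rows = new List<RecordDto>();
                using (var conn = new OdbcConnection("DSN=NetezzaDsn"))  // placeholder DSN
                using (var cmd = new OdbcCommand("SELECT id, name FROM some_table", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            rows.Add(new RecordDto { Id = reader.GetInt32(0), Name = reader.GetString(1) });
                }
                return rows;
            }
        }

    The Silverlight client then calls this through an asynchronous service reference and binds the returned collection to its controls.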

    Read the article

  • Java exception: "Can't get a Writer while an OutputStream is already in use" when running xAgent

    - by Steve Zavocki
    I am trying to implement Paul Calhoun's Apache FOP solution for creating PDFs from XPages (from Notes In 9 #102). When I run the xAgent that does the processing, I get the following Java exception:

        Can't get a Writer while an OutputStream is already in use

    The only change I made to Paul's code was the package name. I have isolated the exception to this SSJS line:

        var jce:DominoXMLFO2PDF = new DominoXMLFO2PDF();

    All that line does is instantiate the class; there is no custom constructor. I don't believe it is the code itself but rather some configuration issue. The SSJS code is in the beforeRenderResponse event where it should be, and I haven't changed anything on the xAgent. I have copied the jar files from Paul's sample database to mine, and I have verified that the build paths are the same in both databases. Everything compiles fine (after I did all of this). This appears to be an XPages-only exception.
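    For background, the servlet contract behind that message lets a response hand out either a character Writer or a binary OutputStream, never both in one response; an xAgent that streams a PDF therefore has to stop the XPage from emitting any markup of its own. A bare-bones sketch of that pattern in SSJS (an assumption about the setup, not a reproduction of Paul's code; note the XPage itself should have rendered="false"):

        // beforeRenderResponse of an xAgent XPage
        var exCon = facesContext.getExternalContext();
        var response = exCon.getResponse();
        response.setContentType("application/pdf");
        // use only the OutputStream; touching response.getWriter() anywhere in the
        // same response is what raises "Can't get a Writer while an OutputStream
        // is already in use"
        var stream = response.getOutputStream();
        // ... write the generated PDF bytes into 'stream' ...
        stream.flush();
        stream.close();
        facesContext.responseComplete();  // tell JSF not to render anything else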

    Read the article

  • Error 2006: "MySQL server has gone away" using Python, Bottle Microframework and Apache

    - by Jamie
    After accessing my web app - built with Python 2.7, the Bottle micro-framework v0.10.6, Apache 2.2.22 and mod_wsgi, on Ubuntu Server 12.04 64-bit - I receive this error after several hours:

        OperationalError: (2006, 'MySQL server has gone away')

    I'm using the MySQLdb driver. The error usually happens when I haven't accessed the server for a while. I've tried closing all the connections, which I do using:

        cursor.close()
        db.close()

    where db is the standard MySQLdb.Connection() object. The my.cnf file looks something like this:

        key_buffer = 16M
        max_allowed_packet = 128M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10

    It is the default configuration file, except that max_allowed_packet is 128M instead of 16M. The queries to the database are quite simple; at most they retrieve approximately 100 records. Can anyone help me fix this? One idea I did have was to use try/except, but I'm not sure if that would actually work. Thanks in advance, Jamie
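    Error 2006 on a long-idle connection usually means the server closed it after wait_timeout (8 hours by default), so any global connection held by mod_wsgi between requests will eventually go dead, which matches "it happens when I don't access the server". A common workaround is to ping the connection at the start of each request; a sketch (the connect parameters are placeholders, and the reconnect flag to ping() depends on the MySQLdb version, so verify it):

        import MySQLdb

        def get_connection(db=None):
            """Return a live connection, reviving or replacing a dead one."""
            if db is not None:
                try:
                    db.ping(True)   # True asks MySQLdb to reconnect if the link died
                    return db
                except MySQLdb.OperationalError:
                    pass            # fall through and build a fresh connection
            return MySQLdb.connect(host="localhost", user="app",
                                   passwd="secret", db="appdb")  # placeholders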

    Read the article

  • MySQL: LIKE query, similar words - but don't show the searched word

    - by elmaso
    Hello, currently I use this method to show similar words for a search request:

        $query = "SELECT * FROM searches WHERE Query LIKE '%$search%' ORDER BY Query";

    If someone searches for "nelly", it looks in the database for similar entries: "nelly furtado", "nelly ft. kelly", and so on. But I don't want to show the searched word itself. Example: you've searched for nelly - try these too: nelly, nelly furtado, nelly ft. The bold word should not show up again, because it's the word that was searched for. Is there perhaps a method using MATCH ... AGAINST? Thank you!
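    The simplest fix keeps the LIKE query and filters out the exact term, shown here in the same interpolated style as the question (in real code the value should be escaped or parameterized):

        SELECT *
        FROM searches
        WHERE Query LIKE '%$search%'
          AND Query != '$search'   -- exclude the exact word that was searched
        ORDER BY Query

    MATCH ... AGAINST would also work, but it needs a FULLTEXT index on Query, so the extra AND clause is the smaller change.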

    Read the article

  • SubSonic 3.0 query limit with MySQL, C#.NET and LINQ

    - by omegawkd
    Hello, a quick question which may or may not be easily answered. Currently, in order to return a limited result set to my calling code using SubSonic, I use a function similar to the one below and then select the first item from the data set:

        _DataSet = from CatSet in t2_aspnet_shopping_item_category.All()
                   join CatProdAssignedLink in t2_aspnet_shopping_link_categoryproduct.All()
                       on CatSet.CategoryID equals CatProdAssignedLink.CategoryID
                   join ProdSet in t2_aspnet_shopping_item_product.All()
                       on CatProdAssignedLink.ProductID equals ProdSet.ProductID
                   where ProdSet.ProductID == __ProductID
                   orderby CatProdAssignedLink.LinkID ascending
                   select CatSet;

    Is there a way to limit the lookup to a certain number of rows in the first place? I'm using MySQL as the underlying database.
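    Composing FirstOrDefault() or Take() onto the query is the usual way to limit it at the source (a sketch; whether SubSonic 3's LINQ provider turns this into a MySQL LIMIT or trims the rows client-side depends on the provider version):

        var first = _DataSet.FirstOrDefault();       // just the first row
        var firstFive = _DataSet.Take(5).ToList();   // first five rows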

    Read the article

  • How do I reduce number of redundant requests with mod_perl properly?

    - by rassie
    In a fairly big legacy project, I've refactored several hairy modules into Moose classes. Each of these modules requires database access to (lazily) fetch its attributes. Since those objects are used pretty heavily, I want to reduce the number of redundant requests, for example for unchanged data. Now, how do I do that properly? I've got several alternatives:

    1. Implement caching in my Moose classes via a role, storing values in memcached with an expiration of 5-10 minutes (probably not too difficult, but tricky with lazy attributes). Update: KiokuDB could probably help here; I have to read up on how it handles attributes.
    2. Migrate to DBIx::Class (needs to be done anyway) and implement caching at that level (DBIC will probably take most of the pain away just by itself).
    3. Somehow make my objects persist inside the mod_perl process (no clue how to do this).

    How would you do this, and what do you consider a sane way? Is caching preferred at the object level or the ORM level?
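    For option 1, the core of a memcached-backed role is just a get-or-compute helper; a minimal sketch with Cache::Memcached, where the server address, key scheme, and the 10-minute TTL are assumptions rather than project conventions:

        use Cache::Memcached;

        my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

        sub cached {
            my ($key, $builder) = @_;
            my $value = $memd->get($key);
            return $value if defined $value;
            $value = $builder->();              # fall through to the DB fetch
            $memd->set($key, $value, 600);      # expire after ~10 minutes
            return $value;
        }

        # inside a lazy attribute's builder:
        # sub _build_name {
        #     my $self = shift;
        #     return cached("thing:" . $self->id . ":name", sub { $self->_fetch_name_from_db });
        # }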

    Read the article

  • Update an InfoPath field with the return value from a web service submit?

    - by Nick
    Is it possible to update an InfoPath field with the result of a call to submit to a web service? We have an InfoPath form used to create items in the database. I would like to add a read-only field to the form for the id (primary key) of the item, filled from the web service's return value when the form is submitted. Is there a way to use the return value as part of a rule with a "Set a field's value" action? I could not find a way to do this in the rules GUI. Is it possible to do this using C# code? Or am I missing something in the GUI?

    Read the article

  • C# WinForms & SQL bindings

    - by vent
    I have an MSSQL database with many tables and relations, and I want to bind it to my C# GUI application with text boxes and combo boxes. Which is the better way to do it?

    1. Do everything manually with my own classes and methods: import the data into a DataSet or into many separate DataTables, create dictionaries for the combo boxes, manually push data into the text boxes, and build a SQL statement when an update action is invoked; or
    2. Let Visual Studio do it automatically: connect my controls with a BindingSource and a BindingNavigator if needed.

    I know how to deal with the first approach, but if the second would be better, my question is: how do I achieve it? I haven't dealt with automatically created data bindings, and I don't know how to tell the DataBinding property (Key|Value|Text) about relations and the current row's ID. Also, what is the method for cancelling/updating changes across all the GUI elements? I need a simple solution for this scenario; it's a quick-and-dirty academic project, which must only "work" and be implemented as fast as possible. Thanks
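    For route 2, the moving parts are a parent BindingSource, a child BindingSource that follows a DataRelation for the combo box, and ordinary control bindings; the BindingSource is also where pending edits get cancelled or committed. A minimal sketch (table, relation, and column names are invented):

        // ds is a DataSet already filled with related "Customers" and "Orders" tables
        var customers = new BindingSource { DataSource = ds, DataMember = "Customers" };
        var orders = new BindingSource { DataSource = customers, DataMember = "FK_Orders_Customers" };
        // the child source tracks the current Customers row through the relation

        textBoxName.DataBindings.Add("Text", customers, "Name");
        comboBoxOrders.DataSource = orders;
        comboBoxOrders.DisplayMember = "OrderName";
        comboBoxOrders.ValueMember = "OrderID";

        bindingNavigator1.BindingSource = customers;  // row navigation, add/delete buttons

        customers.CancelEdit();  // drop the current row's pending changes
        customers.EndEdit();     // or push them into the DataSet, then run the adapter's Update(ds)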

    Read the article

  • Port a live system from App Engine Helper to App Engine Patch

    - by Alexander
    I am running a live system, currently serving about 20K pages a day, which is based on App Engine Helper (Python) with session support provided by App Engine Utilities. One problem I have been having is that sessions randomly log users out from time to time. I would like to try App Engine Patch instead, since it has "native" Django session support, but I am worried that this could be like doing a brain transplant. Specifically, the current database models all inherit from the BaseModel class provided by App Engine Helper, while App Engine Patch has no such inheritance. Does anyone know if it is possible to migrate a live system from App Engine Helper to App Engine Patch? If so, do you have any advice or warnings that I should heed before attempting the transition? Thank you and kind regards, Alex

    Read the article

  • "detached entity passed to persist error" with JPA/EJB code

    - by zengr
    I am trying to run this basic JPA/EJB code:

        public static void main(String[] args) {
            UserBean user = new UserBean();
            user.setId(1);
            user.setUserName("name1");
            user.setPassword("passwd1");
            em.persist(user);
        }

    I get this error:

        javax.ejb.EJBException: javax.persistence.PersistenceException:
        org.hibernate.PersistentObjectException: detached entity passed to persist: com.JPA.Database

    Any ideas? Searching on the internet, the reason I found was: "This was caused by how you created the objects, i.e. if you set the ID property explicitly. Removing the ID assignment fixed it." But I didn't get it; what do I have to modify to get the code working?
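    That advice refers to generated identifiers: when the entity's id is marked as generated, Hibernate treats any object that already carries an id as detached, so persist() refuses it. A sketch of the usual fix (field names guessed from the setters, so treat them as placeholders):

        // let the provider generate the id and never call setId() by hand
        @Entity
        public class UserBean {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Long id;

            private String userName;
            private String password;
            // getters/setters omitted
        }

        // then:
        UserBean user = new UserBean();   // no setId(1)
        user.setUserName("name1");
        user.setPassword("passwd1");
        em.persist(user);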

    Read the article

  • Django South Foreign Keys referring to pks with Custom Fields

    - by Rory Hart
    I'm working with a legacy database that uses MySQL big ints, so I set up a simple custom model field to handle this:

        class BigAutoField(models.AutoField):
            def get_internal_type(self):
                return "BigAutoField"

            def db_type(self):
                return 'bigint AUTO_INCREMENT'  # Note: this won't work with Oracle.

    This works fine with Django South for the id/pk fields (MySQL DESC shows "| id | bigint(20) | NO | PRI | NULL | auto_increment |"), but for the ForeignKey fields in other models, the referring columns are created as int(11) rather than bigint(20). I assume I have to add an introspection rule for the BigAutoField, but there doesn't seem to be any mention of this sort of rule in the documentation (http://south.aeracode.org/docs/customfields.html). Update: currently using Django 1.1.1 and South 0.6.2.
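    The South half of this is a one-line registration (a sketch; the module path is a placeholder for wherever BigAutoField actually lives). The int(11) columns, though, come from Django itself: ForeignKey special-cases AutoField targets down to a plain integer column type, so a matching custom ForeignKey whose db_type() returns 'bigint' is likely needed as well.

        # south_rules.py - register the custom field so South can freeze it
        from south.modelsinspector import add_introspection_rules

        # no extra keyword arguments to introspect; just match the field's import path
        add_introspection_rules([], [r"^myapp\.fields\.BigAutoField"])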

    Read the article

  • Call a non-static method from a static SQLiteDatabase class

    - by Fevos
    I want to display a message to the user (message box or Toast) when an exception happens in the static SQLite database helper that I use. The problem is that I can't call a non-static method from a static one; how can I handle this? This is the method:

        private static SQLiteDatabase getDatabase(Context aContext) {

    and when an exception happens I want to add something like the following inside it, but getApplicationContext() causes the "non-static method referenced from a static context" problem:

        Context context = getApplicationContext();
        CharSequence text = "Hello toast!";
        int duration = Toast.LENGTH_SHORT;

        Toast toast = Toast.makeText(context, text, duration);
        toast.show();
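    Since the method already receives a Context parameter, the usual answer is to use that instead of getApplicationContext(), which is an instance method. A sketch, where the database path and the caught exception type are placeholders for whatever the real method does:

        private static SQLiteDatabase getDatabase(Context aContext) {
            try {
                return SQLiteDatabase.openOrCreateDatabase(
                        aContext.getDatabasePath("app.db"), null);  // placeholder path
            } catch (SQLiteException e) {
                // the passed-in Context works fine for a Toast
                Toast.makeText(aContext, "Database error: " + e.getMessage(),
                        Toast.LENGTH_SHORT).show();
                return null;
            }
        }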

    Read the article

  • Convert IEnumerable<dynamic> to JsonArray

    - by Burt
    I am selecting an IEnumerable<dynamic> from the database using Rob Conery's Massive framework, which comes back as flat POCO-style objects. I need to transform the data and output it as a JSON array (format shown at the bottom). I thought I could do the transform with LINQ; my unsuccessful attempt is shown below:

        using System.Collections.Generic;
        using System.Json;
        using System.Linq;
        using System.ServiceModel.Web;
        ....

        IEnumerable<dynamic> list = _repository.All("", "", 0).ToList();
        JsonArray returnValue = from item in list
                                select new JsonObject()
                                {
                                    Name = item.Test,
                                    Data = new dynamic() {...}...
                                };

    Here is the JSON I am trying to generate:

        [
          {
            "id": "1",
            "title": "Data Title",
            "data": [
              {
                "column1 name": "the value",
                "column2 name": "the value",
                "column3 name": "",
                "column4 name": "the value"
              }
            ]
          },
          {
            "id": "2",
            "title": "Data Title",
            "data": [
              {
                "column1 name": "the value",
                "column2 name": "the value",
                "column3 name": "the value",
                "column4 name": "the value"
              }
            ]
          }
        ]
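    System.Json's JsonObject has no Name/Data properties; it is a dictionary of string keys to JsonValue, and a LINQ query can't be assigned straight to a JsonArray, so building the array imperatively is simpler. A sketch, where the item property names (Id, Title, Column1, ...) are placeholders for whatever Massive actually returns:

        var result = new JsonArray();
        foreach (dynamic item in list)
        {
            var row = new JsonObject
            {
                { "column1 name", (string)item.Column1 },
                { "column2 name", (string)item.Column2 }
            };
            result.Add(new JsonObject
            {
                { "id", (string)item.Id.ToString() },
                { "title", (string)item.Title },
                { "data", new JsonArray(row) }   // wrap the row in the "data" array
            });
        }
        string json = result.ToString();  // serialize the whole array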

    Read the article

  • Microsoft SQL 2005 - using the modulo operator

    - by cc0
    So I have a silly problem. I have not used much MSSQL before, or any SQL for that matter. I basically have a minor mathematical problem that I need solved, and I thought modulo would be good for it. I have a number of dates in the database, but I need them rounded off to the closest multiple of a dynamic integer (which could be anything from 0 to 5,000,000) passed in as a parameter each time this query is called. So I thought I'd use modulo to find the remainder and then subtract that remainder from the date. If there is a better way, or a built-in function, please let me know! What would be the syntax for that? I've tried a lot of things, but I keep getting error messages saying that the modulo operator can't be used with these numeric types, and I have tried casting to all kinds of numeric data types. Any help would be appreciated.
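    The remainder-subtraction idea is sound once both operands are integer types; SQL Server's % operator rejects float operands, which matches the errors described. A sketch with invented table and column names, including a variant that rounds to the closest multiple rather than always down:

        DECLARE @interval bigint;
        SET @interval = 5000000;   -- guard against 0 before using it as a modulus

        SELECT DateValue - (DateValue % @interval) AS RoundedDown,
               (DateValue + @interval / 2)
                   - ((DateValue + @interval / 2) % @interval) AS RoundedToClosest
        FROM dbo.MyTable;
        -- if the column is stored as float, cast first:
        --   CAST(DateValue AS bigint) % @interval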

    Read the article

  • Hundreds of databases: SQL Server log shipping

    - by Oliver
    SQL Server 2005 Standard x64, with 300+ tiny databases (currently 5 MB each); the user base adds databases as needed. I want to implement log shipping for a warm standby, but not via the wizard, since that appears to add three jobs per log-shipped database (one on the primary, two on the secondary). Should I try to write my own, or use something like Quest's LiteSpeed? Or am I being too squeamish about having hundreds of SQL Server Agent jobs all firing off at once (or worse, having to try to stagger them)? All advice welcome.
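    For the roll-your-own route, the backup half can collapse into a single job that walks sys.databases; a sketch (the share path is a placeholder, and the copy and restore halves on the secondary still need equivalents of their own):

        DECLARE @name sysname, @sql nvarchar(4000);
        DECLARE dbs CURSOR FOR
            SELECT name FROM sys.databases
            WHERE database_id > 4                     -- skip the system databases
              AND recovery_model_desc <> 'SIMPLE';    -- log backups need FULL/BULK_LOGGED
        OPEN dbs;
        FETCH NEXT FROM dbs INTO @name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'BACKUP LOG ' + QUOTENAME(@name) +
                       N' TO DISK = N''\\standby\logship\' + @name +
                       N'_' + CONVERT(nvarchar(8), GETDATE(), 112) + N'.trn''';
            EXEC (@sql);
            FETCH NEXT FROM dbs INTO @name;
        END
        CLOSE dbs;
        DEALLOCATE dbs;

    One job on each side (backup, copy, restore) scales with zero marginal jobs per database, at the cost of serializing the work inside each run.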

    Read the article

  • Trouble connecting to vsftpd on Ubuntu Server

    - by littleK
    I have installed Ubuntu Server 10.10 and I am using it to host a domain that I have. I am trying to set up FTP for the server, but I am running into some problems. I have successfully installed vsftpd and I have opened up ports 20 and 21 on my firewall. In my vsftpd configuration I have enabled SSL. Every time I try to connect to my server via FTP, I receive a "Connection Refused" error. I have had a little more success with SSL disabled, but then the connection times out after the LIST command (it does accept my authentication, though). Here is my vsftpd configuration; the SSL settings are at the bottom:

        # Example config file /etc/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Run standalone? vsftpd can run either from an inetd or as a standalone
        # daemon started from an initscript.
        listen=YES
        #
        # Run standalone with IPv6? Like the listen parameter, except vsftpd will
        # listen on an IPv6 socket instead of an IPv4 one. This parameter and the
        # listen parameter are mutually exclusive.
        #listen_ipv6=YES
        #
        # Allow anonymous FTP? (Disabled by default)
        anonymous_enable=NO
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        #local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        #anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        #anon_mkdir_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # If enabled, vsftpd will display directory listings with the time in your
        # local time zone. The default is to display GMT. The times returned by the
        # MDTM FTP command are also affected by this option.
        use_localtime=YES
        #
        # Activate logging of uploads/downloads.
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # You may override where the log file goes if you like. The default is shown below.
        #xferlog_file=/var/log/vsftpd.log
        #
        # If you want, you can have your log file in standard ftpd xferlog format.
        # Note that the default log file location is /var/log/xferlog in this case.
        #xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode. Beware that on some FTP servers,
        # ASCII support allows a denial of service attack (DoS) via the command
        # "SIZE /big/file" in ASCII mode. vsftpd predicted this attack and has always
        # been safe, reporting the size of the raw file.
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        #ftpd_banner=Welcome to blah FTP service.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd.banned_emails
        #
        # You may restrict local users to their home directories. See the FAQ for
        # the possible risks in this before using chroot_local_user or
        # chroot_list_enable below.
        #chroot_local_user=YES
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        #chroot_local_user=YES
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd.chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        #ls_recurse_enable=YES
        #
        # Debian customization: some of vsftpd's settings don't fit the Debian
        # filesystem layout by default. These settings are more Debian-friendly.
        #
        # This option should be the name of a directory which is empty. Also, the
        # directory should not be writable by the ftp user. This directory is used
        # as a secure chroot() jail at times vsftpd does not require filesystem access.
        secure_chroot_dir=/var/run/vsftpd/empty
        #
        # This string is the name of the PAM service vsftpd will use.
        pam_service_name=vsftpd
        #
        # This option specifies the location of the RSA certificate to use for SSL
        # encrypted connections.
        rsa_cert_file=/etc/ssl/private/vsftpd.pem

        # SSL
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES

    Thanks!
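    A timeout right after LIST classically points at the data connection rather than the control connection: with SSL enabled, the firewall cannot read the PASV reply to open data ports on the fly, so the passive port range has to be pinned in vsftpd and opened explicitly. A sketch of the usual additions (the particular port range is an arbitrary choice, not from the question):

        pasv_enable=YES
        pasv_min_port=10000
        pasv_max_port=10100
        # then also allow TCP 10000-10100 through the firewall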

    Read the article

  • Creating a short unique string for each unique long string

    - by king.net
    I'm trying to create a URL-shortener system in C# and ASP.NET MVC. I know about hash tables and I know how to create the redirect system, etc. The problem is indexing the long URLs in the database. Some URLs may be up to 4,000 characters long, and it seems like a bad idea to index strings of that size. The question is: how can I create a unique short string for each URL? For example, can MD5 help me? Is MD5 really unique for each string? Note: I see that Gravatar uses MD5 for email addresses, so if each email address is unique, then its MD5 hash is unique too. Is that right? Can I use the same approach for URLs?
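    Strictly speaking, MD5 cannot be unique - it maps arbitrary-length input to 128 bits, so collisions must exist - but for non-adversarial input like URLs an accidental collision is astronomically unlikely, which is the bet Gravatar makes. Hashing each URL to a fixed 32-character hex string and indexing that column instead of the 4,000-character original is a common pattern; a sketch:

        using System.Security.Cryptography;
        using System.Text;

        static string Md5Hex(string url)
        {
            using (var md5 = MD5.Create())
            {
                byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(url));
                var hex = new StringBuilder(hash.Length * 2);
                foreach (byte b in hash)
                    hex.Append(b.ToString("x2"));   // two lowercase hex digits per byte
                return hex.ToString();              // always 32 characters
            }
        }

    The short public token itself is usually something else entirely (for example, a base-62 encoding of the row's identity column); the hash only serves as the indexable fingerprint of the long URL.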

    Read the article

  • MySQL fast insert dependent on a flag from a separate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million+ rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data, and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:

        idLocations (auto-increment int, PK)
        X (float)
        Y (float)
        AlwaysIgnore (bool)

    which is used as a reference in a second table like so:

        idLocations (int, PK, "FK")
        idDates (int, PK, "FK")
        DATA1 (float)
        DATA2 (float)
        ...
        DATA7 (float)

    Ideally, I'd like to find a method where I can do something like:

        INSERT INTO tblData (idLocations, idDates, DATA1, ..., DATA7)
        VALUES (...), ..., (...)
        WHERE VALUES(idLocations) NOT LIKE (SELECT FROM tblLocation WHERE AlwaysIgnore = TRUE)
        ON DUPLICATE KEY UPDATE DATA1 = VALUES(DATA1)

    That is, for my large batch of input data (250 values in a block), ignore the inserts where the idLocations value matches one flagged with AlwaysIgnore. Anyone have any suggestions? Cheers. -Stuart

    Other details: running MySQL on a semi-dedicated machine, MyISAM engine for the tables.
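    MySQL's INSERT ... VALUES has no WHERE clause, but INSERT ... SELECT does, so one route is to land each 250-row block in a staging table and filter on the way into the big table. A sketch (the staging table, with the same columns as tblData, is an assumption):

        INSERT INTO tblData (idLocations, idDates, DATA1, DATA2, DATA3,
                             DATA4, DATA5, DATA6, DATA7)
        SELECT s.idLocations, s.idDates, s.DATA1, s.DATA2, s.DATA3,
               s.DATA4, s.DATA5, s.DATA6, s.DATA7
        FROM staging AS s
        JOIN tblLocation AS l ON l.idLocations = s.idLocations
        WHERE l.AlwaysIgnore = 0          -- flagged locations never reach the big table
        ON DUPLICATE KEY UPDATE
            DATA1 = VALUES(DATA1), DATA2 = VALUES(DATA2), DATA3 = VALUES(DATA3),
            DATA4 = VALUES(DATA4), DATA5 = VALUES(DATA5), DATA6 = VALUES(DATA6),
            DATA7 = VALUES(DATA7);

    Truncate the staging table between blocks; the join against tblLocation is cheap as long as idLocations is the primary key there.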

    Read the article

  • Python - Linux - Connecting to MS SQL with Windows credentials - FreeTDS + unixODBC + pyodbc or pymssql

    - by Keith P
    There don't seem to be any great instructions for setting this up. Does anyone have any good ones? I am a Linux noob, so be gentle. I did see another post that is similar, but with no real answer. I have a couple of problems. FreeTDS doesn't "seem" to be working. When I try to connect using the "tsql" command, I get the following message:

        Default database being set to databaseName
        There was a problem connecting to the server

    but it doesn't mention what the problem is. The error I get when I try to connect using pyodbc is:

        pyodbc.Error: ('08S01', '[08S01] [unixODBC][FreeTDS][SQL Server]Unable to connect:
        Adaptive Server is unavailable or does not exist (20009) (SQLDriverConnectW)')

    I tried something similar with pymssql but ran into the same kind of issues: I keep getting errors that I can't connect, without being told why.
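    "Adaptive Server is unavailable" is FreeTDS's generic can't-reach-the-host error, so the first thing to pin down is the server entry in freetds.conf and to test it with tsql before involving unixODBC or pyodbc at all. A sketch (host, server name, and credentials are placeholders):

        # /etc/freetds/freetds.conf
        [mssql]
                host = 192.168.0.10      ; the SQL Server machine
                port = 1433
                tds version = 8.0        ; SQL Server 2000 and later

        # test the raw TDS link first:
        #   tsql -S mssql -U 'DOMAIN\username' -P password
        # only once that works, point odbc.ini's Servername at the same
        # [mssql] entry and retry pyodbc

    The DOMAIN\username form for Windows credentials depends on the server accepting NTLM logins from FreeTDS, so treat that part as something to verify against your server's authentication mode.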

    Read the article

  • LINQ to SQL over WCF times out after several calls

    - by Redeemed1
    I have a LINQ-to-SQL (L2S) repository class that instantiates the L2S DataContext in its constructor. The repository is instantiated at run time (using Unity) in a service hosted in IIS with WCF. When I run the client MVC application, the calls to the back-end WCF service work for a while and then time out. I suspected a database issue, as I was depending on IIS garbage collection to dispose of unused DataContext instances in the IIS host, but when I checked the characteristics of the problem I noticed the following: the client makes the call to WCF, but the WCF service does not respond; next, the client times out; some time later (several minutes) the service actually executes the request by instantiating the repository and servicing the call. I have checked both client and server trace logs, and only the client shows WCF errors (the timeout error). Where should I look? Is it something in WCF, or could L2S be blocking with unfreed connections or other resources? Many thanks, Brian
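    One pattern worth ruling out is the DataContext lifetime: a context created in the repository's constructor lives as long as Unity and WCF keep the repository alive, holding on to its connection and every object it has tracked. Scoping the context to each call is cheap to try; a sketch with invented entity and context names:

        // materialize results before the context is disposed
        public IList<Order> GetOrders(int customerId)
        {
            using (var db = new ShopDataContext())
            {
                return db.Orders
                         .Where(o => o.CustomerId == customerId)
                         .ToList();
            }
        }

    If the hang persists with per-call contexts, the next suspects are WCF throttling (serviceThrottling/maxConcurrentCalls) and connection-pool exhaustion, both of which match the "request executes minutes later" symptom.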

    Read the article

  • Sync Framework, LINQ, and my DAL

    - by Refracted Paladin
    I am creating a WPF app that needs to allow users to work in a temporarily disconnected state, and I plan to use a local database cache. My questions are about my data access layer. Do you typically point the whole DAL at the cache, or at both stores with a switching mechanism? Is Entity Framework a good way to go for my DAL against the cache? I am used to LINQ to SQL, but my understanding is that I can't use it against SQL Server CE; is that correct? Thanks! PS: Are there any good resources out there for using Sync Framework, LINQ, and WPF all together? Tutorials, videos, etc.?

    Read the article
