Search Results

Search found 42738 results on 1710 pages for 'document database'.


  • Cannot log in to Postgres database despite setting password for user 'postgres'

    - by Serg
    I'm trying to use pgAdmin III to manage my Postgres database. Here are the commands I've run on my machine. First I installed the server:

        sudo apt-get install postgresql

    Then I installed the pgAdmin III application:

        sudo apt-get install pgadmin3

    Next I focused on setting my username and password in order to log in:

        sudo -u postgres psql postgres

    Here I set my password:

        \password postgres

    Finally I just created my database:

        sudo -u postgres createdb repairsdatabase

    When I try to log in using pgAdmin III, I get the error:

        An error has occurred: Error connecting to the server: FATAL: Peer authentication failed for user "postgres"
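
    For context, "peer" authentication applies to connections made over the local Unix socket, where Postgres checks the operating-system username instead of the password; a password set with \password only matters once the connection uses password (e.g. md5) authentication, which is what TCP connections typically get in the default pg_hba.conf. A minimal sketch of such a connection in Python, assuming the psycopg2 driver is installed (the password below is a placeholder):

        # Connect over TCP (127.0.0.1) so password auth is used rather than
        # the Unix-socket "peer" auth named in the error message.
        import psycopg2

        conn = psycopg2.connect(
            host="127.0.0.1",          # TCP, not the local socket
            dbname="repairsdatabase",
            user="postgres",
            password="your_password",  # placeholder: the password set via \password
        )
        print(conn.server_version)
        conn.close()

    The same idea applies in pgAdmin III: pointing the connection at host 127.0.0.1 avoids the peer-authenticated socket path.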

    Read the article

  • Export ASPNETDB data to another database

    - by raziiq
    Hi there. I am developing in Visual Web Developer 2008, and I have SQL Server Express 2005 and SQL Management Studio 2008 installed on my PC. I purchased an MS SQL 2008 database on DiscountASP.net. The host provides only one database, but my project has two. One is ASPNETDB, which contains the roles, users, etc. (created using the Website Configuration Wizard), and the other is my own database containing the data for my website, named MainDB. As the host allows only one database, I exported ASPNETDB's tables and stored procedures into MainDB using aspnet_regsql.exe. The problem is that the stored procedures and tables were exported into MainDB, but the data was not; I mean there are no users in the tables. My question is: how do I export everything in ASPNETDB, including stored procedures, tables, and data, into MainDB?

    Read the article

  • SQL Server 2008 Copy Database Wizard: Fail

    - by Nai
    I am trying to use the SQL Server 2008 Copy Database Wizard to copy a SQL Server 2008 database. I am using the SQL Management Object method. However, the copy fails with the following error:

        ERROR : errorCode=-1073548784 description=Executing the query "/* '==============================================..." failed with the following error: "Cannot use a CONTAINS or FREETEXT predicate on table or indexed view 'Product' because it is not full-text indexed."

    Any ideas on how I can proceed with this would be super helpful. Kind regards, Nai

    Read the article

  • Google Gears - Database - VACUUM

    - by Sirber
    With this code:

        var db = google.gears.factory.create('beta.database');
        db.open('cominar');
        db.execute('CREATE TABLE IF NOT EXISTS Ajax (AJAX_ID INTEGER PRIMARY KEY AUTOINCREMENT, MODULE TEXT, FUNCTION TEXT, CONTENT_JSON TEXT);');
        db.execute('VACUUM;'); // cleans up the DB

    I'm trying to vacuum the database on each initialization, but I get this error:

        Uncaught Error: Database operation failed. ERROR: authorization denied DETAILS: not authorized

    The database was created by me (the same page). Thank you!

    Read the article

  • How do I continuously update data on an ASP page?

    - by Lori
    Hi, I have an ASP page based on a very simple database. It references a single table of probably 30 records and maybe 12 data fields, and everything works great, as I am only uploading a new database every week or so. I have a special circumstance where I would like to upload new data to the database and have it display automatically on the page every 20 to 30 seconds, without the user having to refresh their screen. I would expect up to 1000 concurrent users accessing the data. I have been manually uploading the database via FTP, which will obviously not work on this timeline and would also run the risk of error pages while the database is being replaced. So, can anyone point me in the right direction for setting up this scenario? Other details that might be helpful:

        The database is an Access database (but I could change to another format if needed)
        It is running on a Windows platform hosted by an ISP, not my own server

    Thanks in advance for any help on this! Lori

    Read the article

  • How would the conversion of a custom CMS using a text-file-based database to Drupal be tackled?

    - by James Morris
    Just today I've started using Drupal for a site I'm designing/developing. For my own site http://jwm-art.net I wrote a user-unfriendly CMS in PHP. My brief experience with Drupal is making me want to convert from the CMS I wrote: a CMS whose sole method (other than comments) of automatically publishing content is logging in via SSH and using nano to create a plain text file in a format like so*:

        head<<END_HEAD
        title = Audio
        keywords= open,source,audio,sequencing,sampling,synthesis
        descr = Music, noise, and audio, created by James W. Morris.
        parent = home
        END_HEAD
        main<<END_MAIN
        text<<END_TEXT
        Digital music, noise, and audio made exclusively with @=xlink=http://www.linux-sound.org@:Linux Audio Software@_=@.
        END_TEXT
        image=gfb@--@;Accompanying image for penonpaper-c@right
        ilink=audio_2008
        br=
        ilink=audio_2007
        br=
        ilink=audio_2006
        END_MAIN
        info=text<<END_TEXT
        I've been making PC based music since the early nineties - fortunately most of it only exists as tape recordings.
        END_TEXT

    (http://jwm-art.net/dark.php?p=audio - there are just over 400 pages on there.)

    *The journal-entry form, which takes some of the work out of it, has mysteriously broken. It still required SSH access to copy the file to the main dat dir and to check that I had actually remembered the format correctly and that the code hadn't mis-formatted anything (which it always does).

    I don't want to drop all the old content (just some), but how much work would be involved in converting it, taking into account that I've been using Drupal for a day, have not written any PHP for a couple of years, and have zero knowledge of SQL? How might a team of developers tackle this? How do-able is it for one guy in his spare time?
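
    To give a rough sense of the mechanical half of such a conversion: the heredoc-style format above is regular enough to parse with a short script, after which each page becomes structured data that can be inserted as Drupal nodes. A minimal Python sketch, assuming the format shown above and leaving the inline @...@ markup untouched (helper and file names are hypothetical):

        import re

        # Split one CMS file of the form "name<<END_TAG ... END_TAG" into
        # sections, and each section's "key = value" lines into fields.
        def parse_page(text):
            sections = {}
            block = re.compile(r'(\w+)(?:=text)?<<(\w+)\n(.*?)\n\2', re.DOTALL)
            for name, _tag, body in block.findall(text):
                fields = {}
                for line in body.splitlines():
                    if '=' in line:
                        key, _, value = line.partition('=')
                        fields[key.strip()] = value.strip()
                sections[name] = {'raw': body, 'fields': fields}
            return sections

        page = parse_page(open('audio.txt').read())   # hypothetical file name
        print(page['head']['fields']['title'])        # -> Audio

    The remaining work (mapping parsed pages onto Drupal content types, preserving the ilink cross-references, and rewriting the @...@ markup) is where most of the effort would go, whether for a team or for one person in their spare time.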

    Read the article

  • Add KO "data-bind" attribute on $(document).ready

    - by M.Babcock
    Preface: I've rarely ever been a JS developer, and this is my first attempt at doing something with Knockout.js. The question that follows likely illustrates both points.

    Background: I have a fairly complex MVC3 application that I'm trying to get to work with KO (v2.0.0.0). My MVC app is designed to generically control which fields appear in the view (and how they are added to the view). It makes use of partial views to decide what to draw in the view based on the user's permissions (if the user is in group A then show control A, if the user is in group B then show control B, or possibly if the user is in group A don't include the control at all). Also, my model is very flat, so I'm not sure the built-in ability to apply my ViewModel to a specific portion of the view will help. My solution to this problem is to provide an action in my controller that responds with a JSON object containing the jQuery selector and the content to assign to the "data-bind" attribute, and to bind the ViewModel to the view in the $(document).ready event using the values provided.

    Failed proof-of-concept: My first attempt at proving that this works doesn't actually seem to work, and by "doesn't work" I mean it just doesn't bind the values at all (as can be seen in this jsfiddle). I've tried it with applyBindings inside of the ready event and not, but it doesn't seem to make any difference.

    Question: What am I doing wrong? Or is this just not something that can work with KO (though I've seen at least one example online doing the same thing, and it supposedly works)? Like I said in the preface, I've only ever pretended to be a JS developer (though I've generally gotten it to work in the past), so I'm at a loss where to start trying to figure out what I'm doing wrong. Hopefully this isn't a real noob question.

    Read the article

  • PHP/MySQL database connection priority?

    - by Josh
    Hello, I have a production database that stores usage statistics for a service we're working on. I use PHP to periodically roll up interesting statistics at different resolutions (day, week, month, year), in buckets dictated by the resolution. The PHP application I've written "completes" its data when it runs, such that it will calculate all the rolled-up statistics for the resolutions and periods since it was last run. This is useful if we want to turn it off to debug database performance issues, because I can turn it back on and have it complete its data set. The problem I have is that the calculations are fairly intensive and drive up the QPS of the production database server. Is there a way to set a "priority" on a particular database connection so that it will only use "off-cycles" to do these calculations? Maybe the proper response would be to replicate the tables I'm working on into a different stats database, but unfortunately I don't have the resources in place to attempt such a thing (yet). Thanks in advance for any help, Josh
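
    For what it's worth, stock MySQL and PostgreSQL expose no general per-connection priority, so the usual workaround is to make the job gentle rather than the connection low-priority: process the roll-up in small batches and sleep between them, so production queries always get a window. A sketch of that loop in Python (the poster's code is PHP; the shape is the same, and the two callbacks here are hypothetical):

        import time

        BATCH = 500          # rows aggregated per chunk
        PAUSE_SECONDS = 2.0  # time handed back to the server between chunks

        def run_rollup(fetch_batch, write_rollup):
            offset = 0
            while True:
                rows = fetch_batch(offset, BATCH)  # e.g. SELECT ... LIMIT offset, BATCH
                if not rows:
                    break                          # data set is "complete"
                write_rollup(rows)                 # aggregate into the bucket tables
                offset += len(rows)
                time.sleep(PAUSE_SECONDS)          # throttle: yield the "off-cycles"

    Because the application already "completes" its data set on each run, slowing it down this way costs nothing but elapsed time.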

    Read the article

  • Tool for documenting the fields of a database?

    - by Jerome WAGNER
    Hello, I need to add documentation to all fields of two databases (one PostgreSQL and one SQL Server), with around 100 tables each. What tool would be convenient for doing that (reverse-engineering the schema + adding documentation manually on all fields)? My preference would go to an open source tool with graphical and XML output. Thanks for your help, Jerome WAGNER

    Read the article

  • How do I align ReSharper's "cleanup code" with Visual Studio's "format document"

    - by Thomas Jespersen
    I'm a big fan of ReSharper's "cleanup code" feature, especially the solution-wide cleanup. But when I use Visual Studio's Ctrl+K, Ctrl+D (Format Document), it formats the code slightly differently than ReSharper does. I'm on a quest to align ReSharper with Visual Studio (not the other way around, because you cannot share Visual Studio settings in the solution/source control system). So I'm after something like this:

        <Configuration>
          <CodeStyleSettings>
            <Sharing>SOLUTION</Sharing>
            <CSharp>
              <FormatSettings>
                <SPACE_AROUND_MULTIPLICATIVE_OP>True</SPACE_AROUND_MULTIPLICATIVE_OP>
                <SPACE_BEFORE_TYPEOF_PARENTHESES>False</SPACE_BEFORE_TYPEOF_PARENTHESES>
              </FormatSettings>
            </CSharp>
          </CodeStyleSettings>
        </Configuration>

    Which other settings will help ReSharper format code like Visual Studio?

    Read the article

  • Setting up nHibernate with an Oracle database and Visual Studio 2010

    - by Geoff
    I'm creating an ASP.NET project and I would like to set up NHibernate as my ORM tool. I will be using an existing Oracle database and Visual Studio 2010. ORM tools are very new to me, and I could really use any advice to better understand the tool and the process required to implement it. I've been following an article at http://nhforge.org/wikis/howtonh/your-first-nhibernate-based-application.aspx to learn about it, and I am stuck where it says to create a local database, as mine only gives me the option to create a SQL Server database (perhaps this is new for Visual Studio 2010?). Is the purpose of this database just to cache results from the live database? Thanks for your help! Geoff

    Read the article

  • XML document being parsed as single element instead of sequence of nodes

    - by Rob Carr
    Given XML that looks like this:

        <Store>
          <foo>
            <book>
              <isbn>123456</isbn>
            </book>
            <title>XYZ</title>
            <checkout>no</checkout>
          </foo>
          <bar>
            <book>
              <isbn>7890</isbn>
            </book>
            <title>XYZ2</title>
            <checkout>yes</checkout>
          </bar>
        </Store>

    I am getting this as my parsed xmldoc:

        >>> from xml.dom import minidom
        >>> xmldoc = minidom.parse('bar.xml')
        >>> xmldoc.toxml()
        u'<?xml version="1.0" ?><Store>\n<foo>\n<book>\n<isbn>123456</isbn>\n</book>\n<title>XYZ</title>\n<checkout>no</checkout>\n</foo>\n<bar>\n<book>\n<isbn>7890</isbn>\n</book>\n<title>XYZ2</title>\n<checkout>yes</checkout>\n</bar>\n</Store>'

    Is there an easy way to pre-process this document so that when it is parsed, it isn't parsed as a single XML element?
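
    Note that toxml() serializes the entire tree back into one string, which can make the result look like a single element; the parsed document is already a tree of nodes that can be walked. A minimal sketch with minidom, assuming the XML above is saved as bar.xml:

        from xml.dom import minidom

        xmldoc = minidom.parse('bar.xml')
        # The root <Store> element holds <foo> and <bar> as separate children.
        for child in xmldoc.documentElement.childNodes:
            if child.nodeType == child.ELEMENT_NODE:
                isbn = child.getElementsByTagName('isbn')[0].firstChild.data
                title = child.getElementsByTagName('title')[0].firstChild.data
                print(child.tagName, isbn, title)   # foo 123456 XYZ / bar 7890 XYZ2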

    Read the article

  • Normalize or denormalize in high-traffic websites

    - by Inam Jameel
    What is the best practice for database design for high-traffic websites like this one, Stack Overflow? Should one use a normalized database for record keeping, a denormalized technique, or a combination of both? Is it sensible to design a normalized database as the main database for record keeping, to reduce redundancy, and at the same time maintain a denormalized form of the database for fast searching? Or should the main database be denormalized, with normalized views made at the application level for fast database operations? Or is there some approach besides those mentioned above? What is the best practice for designing high-traffic websites?

    Read the article

  • MySQL: Load database to memory

    - by Adam Matan
    Hi, Is there a way to load an entire MySQL database into RAM, especially on an EC2 server?

        The database is quite small (~500 MB)
        I have enough memory
        Speed is crucial: the resulting queries are used to serve a dynamic web page

    Thanks, Adam
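
    One application-side pattern when the data set genuinely fits in memory is to read the hot tables once at startup and answer lookups from process memory (MySQL's MEMORY storage engine is another route, though it has type and durability limitations). A minimal sketch in Python, with the connection details and table name as placeholders:

        import MySQLdb  # any DB-API driver follows the same shape

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="stats")
        cur = conn.cursor()
        cur.execute("SELECT id, payload FROM hot_table")  # hypothetical table
        cache = dict(cur.fetchall())                      # whole table in RAM
        conn.close()

        def lookup(key):
            # Requests hit the in-process dict instead of MySQL.
            return cache.get(key)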

    Read the article

  • Database importing problem with SQL Server

    - by tibin mathew
    Hi, I have a database working in my local SQL Server 2005 Express edition. I have to import my local database into a remote server database. For that I established a connection to that remote server, and I can now see that database. But when I tried to restore the database from my local machine, I got an error message when I gave the backup file location. Below is the error message:

        The EXECUTE permission was denied on the object 'xp_availablemedia', database 'mssqlsystemresource', schema 'sys'. The user does not have permission to perform this action. The statement has been terminated. (Microsoft SQL Server, Error: 229)

    What is the problem, and how can I solve it? Please help me.

    Read the article

  • XmlSlurper/NekoHTML document fragment parsing - No HTML or BODY tags wanted

    - by Misha Koshelev
    Dear All, I am trying to parse the following HTML fragment, and I would like to get the same fragment as output (without HTML and BODY tags). Is this possible? If so, how? P.S. I am reading here: http://nekohtml.sourceforge.net/faq.html#fragments and I believe I have added the correct options below; however, the output is still incorrect :( Thank you, Misha

        import groovy.xml.MarkupBuilder
        import groovy.xml.StreamingMarkupBuilder
        import groovy.util.XmlNodePrinter
        import groovy.util.slurpersupport.NodeChild

        def text = """
        <div><h2>Test</h2>
        <div>Hi</div>
        </div>
        """

        // Parse
        // Note: this configuration object is built but never handed to the
        // SAXParser below, so the document-fragment feature never takes effect.
        def config = new org.cyberneko.html.HTMLConfiguration()
        config.setFeature("http://cyberneko.org/html/features/balance-tags/document-fragment", true)
        def html = new XmlSlurper(new org.cyberneko.html.parsers.SAXParser()).parseText(text)

        // Output
        def printNode(NodeChild node) {
            def writer = new StringWriter()
            writer << new StreamingMarkupBuilder().bind {
                mkp.declareNamespace('': node[0].namespaceURI())
                mkp.yield node
            }
            new XmlNodePrinter().print(new XmlParser().parseText(writer.toString()))
        }

        printNode(html)

    Output:

        <HTML>
          <tag0:HEAD xmlns:tag0="http://www.w3.org/1999/xhtml"/>
          <BODY>
            <DIV>
              <H2>
                Test
              </H2>
              <DIV>
                Hi
              </DIV>
            </DIV>
          </BODY>
        </HTML>

    Read the article

  • Explaining Verity index and document search limits

    - by Ahmad
    At present, we have a CF8 Standard Edition server which has some limitations around Verity indexing. According to Adobe, Verity Server has the following document search limits (limits are for all collections registered to Verity Server):

        10,000 documents for ColdFusion Developer Edition
        125,000 documents for ColdFusion Standard Edition
        250,000 documents for ColdFusion Enterprise Edition

    We have now reached a stage where the server-wide number of documents indexed exceeds 125k. However, the largest Verity collection consists of about 25k documents (and this is expected to grow). Only one collection is ever searched at a time. In my understanding, this means that I can still search an entire collection with no restrictions. Is this correct? Or does it mean that only documents that were indexed across all collections prior to reaching the limit are actually searchable? We are considering moving to CF9 Standard as a solution, using the Solr engine, which has no such restrictions. The coldfusionjedi blog highlights some differences between Verity and Solr. However, I am trying to gain a clearer understanding of this before we commit to an upgrade. Can someone provide a clear explanation of what this limit means and how it actually affects Verity searching and indexing?

    Read the article

  • Exporting data from a PHP page to a Word document

    - by udaya
    Hi, I exported data from a PHP page to a Word document, but the problem is that the header is not repeated on every page. I had the same problem when exporting data to PDF, but there I got the result I wanted by using the FPDF library. In PDF I get results like this, for example:

        Page 1:
        slno  name
        1     udaya
        2     sankar

        Page 2:
        slno  name
        3     chendu
        4     Akila

    I want the same kind of result in Word. How do I get that? This is the function I used:

        function changeDetails()
        {
            $bType = $this->input->post('textvalue');
            if ($bType == "word") {
                $this->load->library('table');
                $data['countrytoword'] = $this->AddEditmodel1->export();
                $this->table->set_heading('Name', 'Country', 'State', 'Town');
                $out = $this->table->generate($data['countrytoword']);
                header("Content-Type: application/vnd.ms-word");
                header("Expires: 0");
                header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
                header("Content-disposition: attachment; filename=$cur_date.doc");
                echo '';
                echo 'CountryList';
                print_r($out);
            }
        }

    Read the article

  • Changing SQL Server database collation

    - by plaisthos
    I have a request to change the collation of a SQL Server database:

        ALTER DATABASE solarwind95 collate SQL_Latin1_General_CP1_CI_AS

    but I get this strange error:

        Meldung 5075, Ebene 16, Status 1, Zeile 1
        Das 'Spalte'-Objekt 'CustomPollerAssignment.PollerID' ist von 'Datenbanksortierung' abhängig. Die Datenbanksortierung kann nicht geändert werden, wenn ein schemagebundenes Objekt von ihr abhängig ist. Entfernen Sie die Abhängigkeiten der Datenbanksortierung, und wiederholen Sie den Vorgang.

    Sorry for the German error message. I do not know how to switch the language to English, but here is a translation:

        Message 5075, Level 16, State 1, Line 1
        The column object 'CustomPollerAssignment.PollerID' depends on the database collation. The database collation cannot be changed if a schema-bound object depends on it. Remove the dependencies on the database collation and retry the operation.

    I got a ton more errors like that.

    Read the article

  • Linq to XML Document Traversal

    - by Perpetualcoder
    I have an XML document like this:

        <?xml version="1.0" encoding="utf-8" ?>
        <demographics>
          <country id="1" value="USA">
            <state id="1" value="California">
              <city>Long Beach</city>
              <city>Los Angeles</city>
              <city>San Diego</city>
            </state>
            <state id="2" value="Arizona">
              <city>Tucson</city>
              <city>Phoenix</city>
              <city>Tempe</city>
            </state>
          </country>
          <country id="2" value="Mexico">
            <state id="1" value="Baja California">
              <city>Tijuana</city>
              <city>Rosarito</city>
            </state>
          </country>
        </demographics>

    How do I set up LINQ queries for doing things like:

        1. Get all countries
        2. Get all states in a country
        3. Get all cities inside a state of a particular country

    I gave it a try, and I am kind of confused about when to use Elements["NodeName"] versus Descendants, etc. I know I am not the brightest XML guy around. Is the format of the XML file even correct for simple traversal?
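
    The distinction that usually trips people up here is that Elements() returns only direct children while Descendants() searches the whole subtree; with a three-level document like this, child-by-child navigation is all that is needed. A quick way to see the shape of the three queries is the equivalent traversal in Python's ElementTree (a sketch only; the file name is assumed):

        import xml.etree.ElementTree as ET

        root = ET.parse('demographics.xml').getroot()

        # 1. All countries (direct children of the root)
        countries = [c.get('value') for c in root.findall('country')]

        # 2. All states in a given country
        usa = root.find("country[@value='USA']")
        states = [s.get('value') for s in usa.findall('state')]

        # 3. All cities inside a state of a particular country
        arizona = usa.find("state[@value='Arizona']")
        cities = [city.text for city in arizona.findall('city')]

        print(countries, states, cities)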

    Read the article

  • How do you deal with denormalization / secondary indexes in database sharding?

    - by Continuation
    Say I have a "message" table with 2 secondary indexes: "recipient_id" "sender_id" I want to shard the "message" table by "recipient_id". That way to retrieve all messages sent to a certain recipient I only need to query one shard. But at the same time, I want to be able to make a query that ask for all messages sent by a certain sender. Now I don't want to send that query to every single shard of the "message" table. One way to do this is to duplicate the data and have a "message_by_sender" table sharded by "sender_id". The problem with that approach is that every time a message has been sent, I need to insert the message into both "message" and "message_by_sender" tables. But what if after inserting into "message" the insertion into "message_by_sender" fail? In that case the message exists in "message" but not in "message_by_sender". How do I make sure that if a message exists in "message" then it also exists in "message_by_sender" without resorting to 2 phase commit? This must be a very common issue for anyone who shards their databases. How do you deal woth it?

    Read the article

  • Decompressing a .gz file from the Documents directory?

    - by senthilmuthu
    Hi, I have a .gz (zip) file in the Documents directory and I want to unzip it. I am using the libz.dylib library. Will it decompress and save all the data to that file path, and how can I get the extracted data? Has anyone experience doing this? Any help? When I use the method below and put a breakpoint in, it returns a data error (logged via NSLog): Z_DATA_ERROR.

        - (id)initWithGzippedData:(NSData *)gzippedData
        {
            [gzippedData retain];
            if ([gzippedData length] == 0) return nil;

            unsigned full_length = [gzippedData length];
            unsigned half_length = [gzippedData length] / 2;
            NSMutableData *decompressed = [[NSMutableData alloc] initWithLength:(full_length + half_length)];
            BOOL done = NO;
            int status;

            z_stream strm;
            strm.next_in = (Bytef *)[gzippedData bytes];
            strm.avail_in = [gzippedData length];
            strm.total_out = 0;
            strm.zalloc = Z_NULL;
            strm.zfree = Z_NULL;

            // 15+32 lets zlib auto-detect a gzip or zlib header.
            if (inflateInit2(&strm, (15+32)) != Z_OK) {
                [gzippedData release];
                [decompressed release];
                return nil;
            }

            while (!done) {
                // Make sure we have enough room and reset the lengths.
                if (strm.total_out >= [decompressed length])
                    [decompressed increaseLengthBy:half_length];
                strm.next_out = [decompressed mutableBytes] + strm.total_out;
                strm.avail_out = [decompressed length] - strm.total_out;

                // Inflate another chunk.
                status = inflate(&strm, Z_SYNC_FLUSH);
                if (status == Z_DATA_ERROR) {
                    NSLog(@"data error");  // Z_DATA_ERROR: input is not valid gzip/zlib data
                }
                if (status == Z_STREAM_END)
                    done = YES;
                else if (status != Z_OK)
                    break;
            }

            if (inflateEnd(&strm) != Z_OK) {
                [decompressed release];
                return nil;
            }

            // Set real length.
            [decompressed setLength:strm.total_out];

            id newObject = [self initWithBytes:[decompressed bytes] length:[decompressed length]];
            [decompressed release];
            [gzippedData release];
            return newObject;
        }

    Read the article

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser in Android. An example file would be:

        <Lists>
          <List name="foo">
            <Note title="note 1" .../>
            <Note title="note 2" .../>
          </List>
          <List name="bar">
            <Note title="note 3" .../>
          </List>
        </Lists>

    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes. However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e., when the user presses a cancel button). The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e., a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser. One final note: I'm running this on Android, so I will have the parser running on a worker thread to keep the UI responsive. If you know how I can kill the thread safely while the parser is running, that would answer my question as well.
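
    Throwing from the handler is in fact the idiomatic way to stop a SAX parse early; SAX has no other stop signal, and in Java the handler methods are declared to throw SAXException partly for this reason. A minimal sketch of the pattern using Python's xml.sax (the same structure applies to a Java HandlerBase on Android):

        import xml.sax

        class StopParsing(Exception):
            """Raised by the handler to abort the parse."""

        class NoteHandler(xml.sax.ContentHandler):
            def __init__(self):
                xml.sax.ContentHandler.__init__(self)
                self.cancelled = False     # flipped from outside, e.g. a cancel button
                self.count = 0

            def startElement(self, name, attrs):
                if self.cancelled:
                    raise StopParsing()    # unwinds cleanly out of parse()
                if name == "Note":
                    self.count += 1

        handler = NoteHandler()
        try:
            xml.sax.parse("lists.xml", handler)
        except StopParsing:
            pass                           # aborted on request
        print(handler.count, "notes seen before stopping")

    On the threading point: rather than killing the worker thread, the cancel button just sets the flag, the parser exits at the next element, and the thread finishes normally, which keeps cleanup deterministic.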

    Read the article

  • Help choosing a NoSQL database for a project

    - by potapuff
    There is a table mapping doc_id (integer) to value (integer), with approximately 100k doc_ids and 27?? rows. The majority of queries on this table search for documents similar to the current document: select the 10 documents maximizing (count of values shared with the current document) / (count of values in the document). Nowadays we use PostgreSQL. The table weighs ~1.5 GB with its index, and the average query time is ~0.5 s. Should I transfer all this to a NoSQL database, and if so, which one?
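
    Reading "the document" in that ranking as the candidate document, the score being maximized is an overlap ratio over each document's value set. A small Python sketch of the metric, just to pin down what any replacement store would have to compute quickly:

        # score(candidate) = |values(current) & values(candidate)| / |values(candidate)|
        def similarity(current_values, candidate_values):
            current, candidate = set(current_values), set(candidate_values)
            if not candidate:
                return 0.0
            return len(current & candidate) / float(len(candidate))

        def top_similar(current_values, candidates, k=10):
            # candidates: {doc_id: iterable of values}; hypothetical shape
            ranked = sorted(candidates,
                            key=lambda d: similarity(current_values, candidates[d]),
                            reverse=True)
            return ranked[:k]

    Whatever the storage engine, the practical question is whether it can index value -> doc_id so this top-k is computed only over documents sharing at least one value, rather than by scanning everything.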

    Read the article

< Previous Page | 163 164 165 166 167 168 169 170 171 172 173 174  | Next Page >