Search Results

Search found 30301 results on 1213 pages for 'content db'.


  • HTML5 drag & drop: The dragged element's content is missing in WebKit browsers.

    - by Cibernox
    I'm trying to implement something similar to a cart where you can drop items from a list. These items (<li> elements) have some elements inside (divs, spans, and that sort of thing). The drag and drop itself works great, but the dragged element's image doesn't show its content in WebKit browsers. My list element has a border and a background color. In Firefox, the drag image is the whole item; in WebKit browsers, it is only the dragged element without its content: I see the background and border, but no text inside. I tried to make a copy of the element and force it to be the image, but it doesn't work:

        var dt = ev.originalEvent.dataTransfer;
        dt.setDragImage( $(ev.target).clone()[0], 0, 0);

    I have a simplified example that exhibits the same behavior: http://jsfiddle.net/ksnJf/1/
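    A workaround often suggested for WebKit (an assumption here, not something confirmed in the question): setDragImage can only render an element that is attached to the document, so a detached clone() yields an empty drag image. A minimal sketch of appending the clone off-screen first:

        // Hypothetical jQuery dragstart handler, following the question's style
        $('li.item').on('dragstart', function (ev) {
            var dt = ev.originalEvent.dataTransfer;
            var clone = $(this).clone()
                .css({ position: 'absolute', top: '-1000px', left: '-1000px' })
                .appendTo(document.body);      // WebKit needs the node in the DOM
            dt.setDragImage(clone[0], 0, 0);
            // Safe to remove once the drag image has been captured
            setTimeout(function () { clone.remove(); }, 0);
        });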


  • Most efficient way to Update with Linq2Sql

    - by pranay
    Can I update my employee record as in the function below, or do I have to query the employee collection first and then update the data?

        public int updateEmployee(App3_EMPLOYEE employee)
        {
            DBContextDataContext db = new DBContextDataContext();
            db.App3_EMPLOYEEs.Attach(employee);
            db.SubmitChanges();
            return employee.PKEY;
        }

    Or do I have to do this?

        public int updateEmployee(App3_EMPLOYEE employee)
        {
            DBContextDataContext db = new DBContextDataContext();
            App3_EMPLOYEE emp = db.App3_EMPLOYEEs.Single(e => e.PKEY == employee.PKEY);
            db.App3_EMPLOYEEs.Attach(employee, emp);
            db.SubmitChanges();
            return employee.PKEY;
        }

    I don't want to use the second option. Is there a more efficient way to update the data?
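    For reference, LINQ to SQL's Attach has an overload that marks the entity as modified without the extra SELECT. A sketch of the first approach using it (this relies on the table having a ROWVERSION/TIMESTAMP column, or UpdateCheck.Never on its columns, which is LINQ to SQL's documented requirement and an assumption about this schema):

        public int updateEmployee(App3_EMPLOYEE employee)
        {
            using (var db = new DBContextDataContext())
            {
                // Attach as modified: no round-trip to fetch the original row;
                // optimistic concurrency is handled by the version column.
                db.App3_EMPLOYEEs.Attach(employee, true);
                db.SubmitChanges();
                return employee.PKEY;
            }
        }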


  • Why is it inserting 0's instead of blank spaces into my DB using php?

    - by zeckdude
    I have an insert:

        $sql = 'INSERT into orders SET fax_int_prefix = "'.$_SESSION['fax_int_prefix'].'", fax_prefix = "'.$_SESSION['fax_prefix'].'", fax_first = "'.$_SESSION['fax_first'].'", fax_last = "'.$_SESSION['fax_last'];

    All of these fields are blank right before the insert. Here is one of them echoed out just before the insert:

        $_SESSION[fax_prefix] =

    For some reason it inserts the integer 0 instead of a blank value or null, as it should. Why is it inserting 0's instead of blank spaces into my DB?
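    The most likely explanation (an inference from the symptom, not something stated in the post): the columns are numeric, and outside strict mode MySQL coerces an empty string to 0 when inserting into an INT column. Either make the columns VARCHAR, or insert NULL explicitly when the value is empty. A hedged sketch ($link is a hypothetical mysqli connection):

        // Insert NULL instead of '' for empty session values
        $fax_prefix = ($_SESSION['fax_prefix'] === '')
            ? 'NULL'
            : '"' . mysqli_real_escape_string($link, $_SESSION['fax_prefix']) . '"';
        $sql = 'INSERT INTO orders SET fax_prefix = ' . $fax_prefix;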


  • In Rails models, symbols get automatically converted to YAML when saving to the DB. What is the correct approach?

    - by Ram
    My model (Game, in this example) has a status column, but I usually set the status using symbols, e.g. self.status = :active.

        MATCH_STATUS = {
          :betting_on => "Betting is on",
          :home_team_won => "Home team has won",
          :visiting_team_won => "Visiting team has won",
          :game_tie => "Game is tied"
        }.freeze

        def viewable_status
          MATCH_STATUS[self.status]
        end

    I use the map above to switch between the stored status and the viewable status, and vice versa. However, when the data gets saved to the DB, ActiveRecord prepends the YAML marker "--- " to each status, so the status is mangled when I retrieve it back. What should be the correct approach?
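    One common approach (a sketch, not from the original post): keep plain strings in the column and convert at the attribute boundary, so ActiveRecord never has a Symbol to serialize:

        def status
          s = read_attribute(:status)
          s && s.to_sym
        end

        def status=(value)
          write_attribute(:status, value.to_s)
        end

    With that in place, self.status = :active stores "active", and MATCH_STATUS[self.status] keeps working unchanged.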


  • php mysqli help, first line in DB not being returned?

    - by williamsongibson
    Here is my code:

        <?php
        require_once 'connect.php';
        $sql = "SELECT * FROM `db-pages`";
        $result = $mysqli->query($sql) or die($mysqli->error.__LINE__);
        while ($row = $result->fetch_assoc()) {
            echo($row['pagetitle'].' - To edit this page <a href="editpage.php?id='.$row['id'].'">click here</a><br>');
        }
        ?>

    I've added a couple more rows to the database and it's returning them all, apart from the row with id=1. Any idea why?
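    A classic cause of exactly this symptom (hedged: the call isn't visible in the snippet as posted, but it is often left over from debugging) is an extra fetch before the loop, which consumes the first row:

        $result = $mysqli->query($sql);
        $row = $result->fetch_assoc();            // consumes the id=1 row
        while ($row = $result->fetch_assoc()) {   // so the loop starts at id=2
            // ...
        }

    If no such call exists, checking $result->num_rows would show whether the row is missing from the result set or just not being displayed.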


  • Errors trying to run MongoDB

    - by SomeKittens
    I'm running Ubuntu Server 12.04 (32-bit) on an old (1998) computer. Everything's working fine until I try to start MongoDB:

        somekittens@DLserver01:~$ mongo
        MongoDB shell version: 2.2.2
        connecting to: test
        Sun Dec 16 22:47:50 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
        exception: connect failed

    Googling the error led me to all sorts of "repair" options, none of which fixed anything. I've also removed MongoDB and installed it again (using apt-get; I have not built from source). Mongo's log shows the following error:

        Thu Dec 13 18:36:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Thu Dec 13 18:36:32
        Thu Dec 13 18:36:32 [initandlisten] MongoDB starting : pid=758 port=27017 dbpath=/var/lib/mongodb 32-bit host=DLserver01
        Thu Dec 13 18:36:32 [initandlisten]
        Thu Dec 13 18:36:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Thu Dec 13 18:36:32 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Thu Dec 13 18:36:32 [initandlisten] **       with --journal, the limit is lower
        Thu Dec 13 18:36:32 [initandlisten]
        Thu Dec 13 18:36:32 [initandlisten] db version v2.2.2, pdfile version 4.5
        Thu Dec 13 18:36:32 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
        Thu Dec 13 18:36:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
        Thu Dec 13 18:36:32 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" }
        Thu Dec 13 18:36:32 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal"
        **************
        Unclean shutdown detected.
        Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
        *************
        Thu Dec 13 18:36:32 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
        Thu Dec 13 18:36:32 dbexit:
        Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close listening sockets...
        Thu Dec 13 18:36:32 [initandlisten] shutdown: going to flush diaglog...
        Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close sockets...
        Thu Dec 13 18:36:32 [initandlisten] shutdown: waiting for fs preallocator...
        Thu Dec 13 18:36:32 [initandlisten] shutdown: closing all files...
        Thu Dec 13 18:36:32 [initandlisten] closeAllFiles() finished
        Thu Dec 13 18:36:32 dbexit: really exiting now

    Running through the recovery instructions led to the following adventure:

        somekittens@DLserver01:/var/log/mongodb$ mongod --repair
        Sun Dec 16 22:42:54
        Sun Dec 16 22:42:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Sun Dec 16 22:42:54
        Sun Dec 16 22:42:54 [initandlisten] MongoDB starting : pid=1887 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
        Sun Dec 16 22:42:54 [initandlisten]
        Sun Dec 16 22:42:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Sun Dec 16 22:42:54 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Sun Dec 16 22:42:54 [initandlisten] **       with --journal, the limit is lower
        Sun Dec 16 22:42:54 [initandlisten]
        Sun Dec 16 22:42:54 [initandlisten] db version v2.2.2, pdfile version 4.5
        Sun Dec 16 22:42:54 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
        Sun Dec 16 22:42:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
        Sun Dec 16 22:42:54 [initandlisten] options: { repair: true }
        Sun Dec 16 22:42:54 [initandlisten] exception in initAndListen: 10296
        *********************************************************************
         ERROR: dbpath (/data/db/) does not exist.
         Create this directory or give existing directory in --dbpath.
         See http://dochub.mongodb.org/core/startingandstoppingmongo
        *********************************************************************
        , terminating
        Sun Dec 16 22:42:54 dbexit:
        Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close listening sockets...
        Sun Dec 16 22:42:54 [initandlisten] shutdown: going to flush diaglog...
        Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close sockets...
        Sun Dec 16 22:42:54 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Dec 16 22:42:54 [initandlisten] shutdown: closing all files...
        Sun Dec 16 22:42:54 [initandlisten] closeAllFiles() finished
        Sun Dec 16 22:42:54 dbexit: really exiting now
        somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data
        somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data/db
        somekittens@DLserver01:/var/log/mongodb$ mongod --repair
        Sun Dec 16 22:43:51
        Sun Dec 16 22:43:51 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Sun Dec 16 22:43:51
        Sun Dec 16 22:43:51 [initandlisten] MongoDB starting : pid=1909 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
        Sun Dec 16 22:43:51 [initandlisten]
        Sun Dec 16 22:43:51 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Sun Dec 16 22:43:51 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Sun Dec 16 22:43:51 [initandlisten] **       with --journal, the limit is lower
        Sun Dec 16 22:43:51 [initandlisten]
        Sun Dec 16 22:43:51 [initandlisten] db version v2.2.2, pdfile version 4.5
        Sun Dec 16 22:43:51 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
        Sun Dec 16 22:43:51 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
        Sun Dec 16 22:43:51 [initandlisten] options: { repair: true }
        Sun Dec 16 22:43:51 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
        Sun Dec 16 22:43:51 dbexit:
        Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close listening sockets...
        Sun Dec 16 22:43:51 [initandlisten] shutdown: going to flush diaglog...
        Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close sockets...
        Sun Dec 16 22:43:51 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Dec 16 22:43:51 [initandlisten] shutdown: closing all files...
        Sun Dec 16 22:43:51 [initandlisten] closeAllFiles() finished
        Sun Dec 16 22:43:51 [initandlisten] shutdown: removing fs lock...
        Sun Dec 16 22:43:51 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor
        Sun Dec 16 22:43:51 dbexit: really exiting now
        somekittens@DLserver01:/var/log/mongodb$ service mongodb stop
        stop: Unknown instance:
        somekittens@DLserver01:/var/log/mongodb$ sudo mongod --repair
        Sun Dec 16 22:45:04
        Sun Dec 16 22:45:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Sun Dec 16 22:45:04
        Sun Dec 16 22:45:04 [initandlisten] MongoDB starting : pid=1921 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
        Sun Dec 16 22:45:04 [initandlisten]
        Sun Dec 16 22:45:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Sun Dec 16 22:45:04 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Sun Dec 16 22:45:04 [initandlisten] **       with --journal, the limit is lower
        Sun Dec 16 22:45:04 [initandlisten]
        Sun Dec 16 22:45:04 [initandlisten] db version v2.2.2, pdfile version 4.5
        Sun Dec 16 22:45:04 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
        Sun Dec 16 22:45:04 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
        Sun Dec 16 22:45:04 [initandlisten] options: { repair: true }
        Sun Dec 16 22:45:04 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/journal"
        Sun Dec 16 22:45:04 [initandlisten] finished checking dbs
        Sun Dec 16 22:45:04 dbexit:
        Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close listening sockets...
        Sun Dec 16 22:45:04 [initandlisten] shutdown: going to flush diaglog...
        Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close sockets...
        Sun Dec 16 22:45:04 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Dec 16 22:45:04 [initandlisten] shutdown: closing all files...
        Sun Dec 16 22:45:04 [initandlisten] closeAllFiles() finished
        Sun Dec 16 22:45:04 [initandlisten] shutdown: removing fs lock...
        Sun Dec 16 22:45:04 dbexit: really exiting now

    Which didn't change anything. What can I do to resolve this? It's an old computer (640MB RAM, single-core P2). Could that be causing it?
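    For what it's worth, the log suggests two separate problems: the packaged server (dbpath /var/lib/mongodb) stopped uncleanly and left an old lock file, and the manual repairs ran against the default /data/db path, some of them as root. A hedged sequence that addresses both (the mongodb user/group is the Ubuntu package default; adjust if yours differs):

        # Repair the database the service actually uses, as the service account
        sudo rm /var/lib/mongodb/mongod.lock
        sudo -u mongodb mongod --repair --dbpath /var/lib/mongodb
        # If any repair ran as root, hand ownership back to the service account
        sudo chown -R mongodb:mongodb /var/lib/mongodb
        sudo service mongodb start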


  • Configuring varnish and django (apache/modwsgi)

    - by Hedde
    I am trying to work out why my application keeps hitting the database while I have set up Varnish in front of Apache. I think I am missing some vital configuration; any tips are welcome. This is my curl result:

        HTTP/1.1 200 OK
        Server: Apache/2.2.16 (Debian)
        Content-Language: en-us
        Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        Cache-Control: s-maxage=60, no-transform, max-age=60
        Content-Type: application/json; charset=utf-8
        Date: Sat, 15 Sep 2012 08:19:17 GMT
        Connection: keep-alive

    My varnishlog:

        13 BackendClose - apache
        13 BackendOpen b apache 127.0.0.1 47665 127.0.0.1 8000
        13 TxRequest b GET
        13 TxURL b /api/v1/events/?format=json
        13 TxProtocol b HTTP/1.1
        13 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        13 TxHeader b Host: foobar.com
        13 TxHeader b Accept: */*
        13 TxHeader b X-Forwarded-For: 92.64.200.145
        13 TxHeader b X-Varnish: 979305817
        13 TxHeader b Accept-Encoding: gzip
        13 RxProtocol b HTTP/1.1
        13 RxStatus b 200
        13 RxResponse b OK
        13 RxHeader b Date: Sat, 15 Sep 2012 08:21:28 GMT
        13 RxHeader b Server: Apache/2.2.16 (Debian)
        13 RxHeader b Content-Language: en-us
        13 RxHeader b Content-Encoding: gzip
        13 RxHeader b Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        13 RxHeader b Cache-Control: s-maxage=60, no-transform, max-age=60
        13 RxHeader b Content-Length: 6399
        13 RxHeader b Content-Type: application/json; charset=utf-8
        13 Fetch_Body b 4(length) cls 0 mklen 1
        13 Length b 6399
        13 BackendReuse b apache
        11 SessionOpen c 92.64.200.145 53236 :80
        11 ReqStart c 92.64.200.145 53236 979305817
        11 RxRequest c HEAD
        11 RxURL c /api/v1/events/?format=json
        11 RxProtocol c HTTP/1.1
        11 RxHeader c User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        11 RxHeader c Host: foobar.com
        11 RxHeader c Accept: */*
        11 VCL_call c recv lookup
        11 VCL_call c hash
        11 Hash c /api/v1/events/?format=json
        11 Hash c foobar.com
        11 VCL_return c hash
        11 VCL_call c miss fetch
        11 Backend c 13 apache apache
        11 TTL c 979305817 RFC 60 -1 -1 1347697289 0 1347697288 0 60
        11 VCL_call c fetch deliver
        11 ObjProtocol c HTTP/1.1
        11 ObjResponse c OK
        11 ObjHeader c Date: Sat, 15 Sep 2012 08:21:28 GMT
        11 ObjHeader c Server: Apache/2.2.16 (Debian)
        11 ObjHeader c Content-Language: en-us
        11 ObjHeader c Content-Encoding: gzip
        11 ObjHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 ObjHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 ObjHeader c Content-Type: application/json; charset=utf-8
        11 Gzip c u F - 6399 69865 80 80 51128
        11 VCL_call c deliver deliver
        11 TxProtocol c HTTP/1.1
        11 TxStatus c 200
        11 TxResponse c OK
        11 TxHeader c Server: Apache/2.2.16 (Debian)
        11 TxHeader c Content-Language: en-us
        11 TxHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 TxHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 TxHeader c Content-Type: application/json; charset=utf-8
        11 TxHeader c Date: Sat, 15 Sep 2012 08:21:29 GMT
        11 TxHeader c Connection: keep-alive
        11 Length c 0
        11 ReqEnd c 979305817 1347697288.292612076 1347697289.456128597 0.000086784 1.163468122 0.000048399
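    One detail that stands out in the log (an observation; the fix below is a hedged sketch, not a confirmed diagnosis): the backend sends Vary: Accept,Accept-Encoding,Accept-Language,Cookie, so any difference in those request headers, a Cookie header in particular, forces a miss and another trip to Django. If nothing under /api/ genuinely needs cookies, stripping them lets the 60-second TTL do its job (Varnish 3.x VCL syntax):

        sub vcl_recv {
            if (req.url ~ "^/api/") {
                unset req.http.Cookie;          # allow hits despite "Vary: ... Cookie"
            }
        }

        sub vcl_fetch {
            if (req.url ~ "^/api/") {
                unset beresp.http.Set-Cookie;   # never cache a response that sets a cookie
            }
        }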


  • Why notify listeners in a content provider query method?

    - by cbrulak
    Vogella has this blog post about content providers, with the snippet below (at the bottom) containing this line:

        cursor.setNotificationUri(getContext().getContentResolver(), uri);

    I'm curious as to why one would want to notify listeners about a query operation. Am I missing something? Thanks.

        @Override
        public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) {
            // Using SQLiteQueryBuilder instead of the query() method
            SQLiteQueryBuilder queryBuilder = new SQLiteQueryBuilder();

            // Check if the caller has requested a column which does not exist
            checkColumns(projection);

            // Set the table
            queryBuilder.setTables(TodoTable.TABLE_TODO);

            int uriType = sURIMatcher.match(uri);
            switch (uriType) {
            case TODOS:
                break;
            case TODO_ID:
                // Adding the ID to the original query
                queryBuilder.appendWhere(TodoTable.COLUMN_ID + "=" + uri.getLastPathSegment());
                break;
            default:
                throw new IllegalArgumentException("Unknown URI: " + uri);
            }

            SQLiteDatabase db = database.getWritableDatabase();
            Cursor cursor = queryBuilder.query(db, projection, selection, selectionArgs, null, null, sortOrder);
            // Make sure that potential listeners are getting notified
            cursor.setNotificationUri(getContext().getContentResolver(), uri);
            return cursor;
        }
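    The naming is the confusing part: setNotificationUri doesn't notify anyone at query time. It registers the returned Cursor as a listener on that URI, so the cursor (and any CursorAdapter built on it) is refreshed when a later write calls notifyChange for the same URI. A sketch of the other half of that handshake (an illustrative insert method, not taken from the blog post):

        @Override
        public Uri insert(Uri uri, ContentValues values) {
            SQLiteDatabase db = database.getWritableDatabase();
            long id = db.insert(TodoTable.TABLE_TODO, null, values);
            // Wakes up every cursor that called setNotificationUri() with this URI
            getContext().getContentResolver().notifyChange(uri, null);
            return ContentUris.withAppendedId(uri, id);
        }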


  • SharePoint 2007 / 2010 Content Indexing: "The file reached the maximum download limit. Check that the full text of the document can be meaningfully crawled."

    - by Stacy Vicknair
    If you have large files in a content source that is being indexed by SharePoint, you might run into the following error message: "The file reached the maximum download limit. Check that the full text of the document can be meaningfully crawled." This is usually caused by SharePoint's MaxDownloadSize setting being lower than the size of the file you are attempting to index. You can increase this value, restart the service, then kick off a full crawl in order to fix the issue, but SharePoint 2007 and 2010 have different methods for accomplishing this task.

    SharePoint 2007: Open up the Registry editor and increase the MaxDownloadSize value to a number (in MB) higher than the largest file being indexed. You can find it at HKEY_LOCAL_MACHINE\Software\Microsoft\Search\1.0\Gathering Manager. After you increase the size, cycle the search service and kick off a full crawl of the content source in question.

    SharePoint 2010: With SharePoint 2010 you can use PowerShell via the SharePoint 2010 Management Shell to change the MaxDownloadSize. Execute the following commands to update the value:

        $ssa = Get-SPEnterpriseSearchServiceApplication
        $ssa.SetProperty("MaxDownloadSize", <new size in MB>)
        $ssa.Update()

    References:
    http://support.microsoft.com/kb/287231
    http://blogs.technet.com/b/brent/archive/2010/07/19/sharepoint-server-2010-maxdownloadsize-and-maxgrowfactor.aspx
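    For example (300 is a hypothetical value; pick something larger than your biggest document, and OSearch14 is the usual SharePoint 2010 search service name, so verify it on your farm):

        $ssa = Get-SPEnterpriseSearchServiceApplication
        $ssa.SetProperty("MaxDownloadSize", 300)
        $ssa.Update()
        Restart-Service OSearch14    # cycle the search service, then start a full crawl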


  • SQL Server 2008 unique problem bringing a DB online...

    - by Nai
    This is the error I am facing:

        TITLE: Microsoft.SqlServer.Smo
        Set offline failed for Database 'Go3D_Retailer'
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".
        Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120)

    Background to this error: I've been trying to move my log-shipping destination database to another physical server for analysis purposes. Because I do not have Active Directory set up, I had to hack my process by using the same username/password on both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E drive on my server and I have no idea why it's trying to open a file from the E drive. I have over 100GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... Any ideas? I'm running SQL Server 2008 Enterprise edition on Windows Server 2008 R2 64-bit.
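    A hedged way to chase down the phantom E:\ path (this assumes, based on the file name in the error, that a full-text catalog file from the source server came along with the database; the logical file name and destination path below are illustrative, so take the real ones from the first query):

        -- Where does SQL Server think every file of this database lives?
        SELECT name, physical_name
        FROM sys.master_files
        WHERE database_id = DB_ID('Go3D_Retailer');

        -- Re-point the stray file at a path that exists on this server
        ALTER DATABASE Go3D_Retailer
        MODIFY FILE (NAME = ftrow_Go3D_catalog,
                     FILENAME = 'D:\SQLData\ftrow_Go3D_catalog.ndf');

        ALTER DATABASE Go3D_Retailer SET ONLINE;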


  • Why would the 'show processlist' command speed up normally slow requests to my remote DB? (connected via VPN)

    - by Hakan B.
    I am running a local Django development server that connects to a remote MySQL server via a VPN (IPSec). Request times are awfully slow and I consistently see timeouts. Attempting to diagnose the problem, I logged in to the remote database and ran:

        show full processlist

    Immediately, the local server went from idle to working. The page had not yet completely loaded, but progress had been made (debug logs confirm this). When I ran 'show full processlist' several more times in succession, the request completed quickly. I can currently reproduce this: unless I run 'show full processlist' over and over on the remote server, my local request usually times out. Does anyone have any idea why this would happen? I'm running Django 1.3 and OS X 10.7. Note: I realize this may not be a question with a clear-cut answer and is probably my fault, but it is odd and reproducible, so I hope someone can at least point me in the right direction. Thanks in advance.
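    One hypothesis worth testing (an assumption drawn from the pattern, since the traffic generated by 'show full processlist' appears to "wake" the link: IPSec tunnels sometimes idle out or silently drop large packets, and MTU trouble produces exactly this stall-until-small-traffic-flows behavior):

        # From the OS X side: can a large, unfragmentable packet cross the tunnel?
        ping -D -s 1400 remote-db-host     # -D sets the don't-fragment bit
        ping -s 56 remote-db-host          # small packets for comparison

    If the large pings die while the small ones survive, lowering the interface MTU or clamping MSS on the VPN would be the next thing to try. (remote-db-host is a placeholder.)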


  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get this error message on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is the kernel reporting that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect the former to be the case: if disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption, then any of these errors is really bad news. Please advise what to do here.
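    For context (general kernel and PostgreSQL behavior, not specific to this setup): the hung-task warning fires when a process has sat in uninterruptible sleep (D state) for 120 seconds. The write is not aborted; the process simply cannot be scheduled until the I/O completes, and PostgreSQL's WAL/fsync ordering means a stalled write delays commits rather than corrupting committed data. While a stall is happening you can see who is stuck:

        # Processes currently in uninterruptible sleep (D state)
        ps -eo state,pid,comm | awk '$1 == "D"'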


  • Access Java based keystore directly on Sun ONE Webserver 6.1

    - by George Bailey
    The keystore seems to reside in one of:

        /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-cert8.db
        /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-key3.db

    What tool would I use to access these files? I have tried the following commands, which did not work:

        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -certreq -keyalg RSA -file /tmp/test.csr -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-cert8.db
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -certreq -keyalg RSA -file /tmp/test.csr -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-key3.db
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -list -storepass password -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-cert8.db
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -list -storepass password -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-key3.db

    They all gave me the error message:

        keytool error: java.io.IOException: Invalid keystore format
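    A hedged pointer (a diagnosis from the file names, not from the post): cert8.db and key3.db are NSS certificate/key databases, not Java keystores, which is why keytool rejects them as an invalid keystore format. NSS databases are managed with certutil, along these lines (the -P prefix is taken from the file names above):

        # List the certificates in the server's NSS database
        certutil -L -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname-

        # Generate a CSR against the same key database
        certutil -R -k rsa -g 2048 -s "CN=sub.domain.ext" \
            -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname- \
            -a -o /tmp/test.csr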


  • Why do I often have to refresh pages I navigate to once for them (or content in them) to load?

    - by GetOutOfBox
    I have noticed a bizarre pattern when using my PC: when I open a link to a website, it will often take a very long time to load, or time out. Sometimes content on the website will be drawn, but again, it seems to get "stuck" for an unusual amount of time before finishing. Most affected is YouTube: almost every time I navigate to a YouTube video from another website such as Google, the video will not begin playing; it will instead just display the player controls with a black screen where the video should be and the buffering symbol, usually before displaying an error such as "The video failed to load". The unusual part of this problem is that whenever it happens, refreshing the page always causes it to load almost immediately the second time around, without any problems. Note that I'm not talking about how some browsers will briefly dump whatever has been cached to the "pallet" when the page is refreshed or loading is stopped, but about the second load of the website being faster. I have done my best to rule out some of the obvious causes: my Windows 7 desktop computer is the only device that seems to be affected; I use Firefox on it (latest version, Flash updated, etc.); my connection has more than enough bandwidth (30 megabits down, 4 up), and I've even QoS'd all other devices to make sure this isn't happening due to usage spikes; and Wireshark is not showing any clearly unusual network activity (i.e. frequently dropped packets).
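    Given the slow-first-load/fast-reload pattern on a single machine, slow DNS resolution on that machine is a common culprit (a hypothesis, not a diagnosis). One way to test it from Windows 7 PowerShell:

        # Time the same lookup twice; a slow first run and a fast second
        # run points at DNS rather than the sites themselves
        Measure-Command { nslookup youtube.com } | Select-Object TotalMilliseconds
        Measure-Command { nslookup youtube.com } | Select-Object TotalMilliseconds

    If the first lookup is slow, pointing the adapter at a different resolver (e.g. 8.8.8.8) and retesting would confirm it.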


  • Need Corrected htaccess File

    - by Vince Kronlein
    I'm attempting to use a WordPress plugin called WP Fast Cache, which creates static HTML files from all your posts, pages, and categories. It creates the following directory structure inside wp-content:

        wp_fast_cache
          example.com
            pagename
              index.html
            categoryname
              postname
                index.html

    Basically just a nested directory structure with a final index.html for each item. But the htaccess edits it makes are crazy:

        #start_wp_fast_cache - do not remove this comment
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html [L]
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>
        #end_wp_fast_cache

    No matter how I try to work this out, I get a 404 Not Found, and not the WordPress 404 but a janky Apache 404. I need to find the correct syntax to route all requests for files or directories that don't exist to:

        wp-content/wp_fast_cache/hostname/request_uri/

    So for example:

        Page:     example.com/about-us/ => wp-content/wp_fast_cache/example.com/about-us/index.html
        Post:     example.com/my-category/my-awesome-post/ => wp-content/wp_fast_cache/example.com/my-category/my-awesome-post/index.html
        Category: example.com/news/ => wp-content/wp_fast_cache/example.com/news/index.html

    Any help is appreciated.
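    One plausible reason for the 404 (an inference from how mod_rewrite works, not a tested fix): in a per-directory .htaccess the RewriteRule substitution is treated as a URL path, so rewriting to /home/user/public_html/... sends Apache looking for that literal path under the document root. Keeping the file test on the filesystem but substituting the URL path is a hedged sketch of the repair:

        #start_wp_fast_cache - do not remove this comment
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^GET$
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{HTTP_COOKIE} !wordpress_logged_in [NC]
        # Test the real file on disk...
        RewriteCond %{DOCUMENT_ROOT}/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        # ...but rewrite to its URL path, not its filesystem path
        RewriteRule ^ /wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>
        #end_wp_fast_cache

    (The x__query__x variant for non-empty query strings would need the same treatment.)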


  • I want to install and get to building a personal MySQL DB on 64 bit Ubuntu [closed]

    - by Ari Hall
    So how do I go from installing MySQL from the Software Center to inputting data into fields and bringing in a comma-delimited file? I've only had brief experience with MS Access and OOo Base a long time ago, so details are appreciated; I just want to get up and running. I have Ubuntu 10.10, 64-bit, if that affects much. If you can link me to a howto that does exactly what I'm looking for, that would work. Again, thanks!
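    A minimal end-to-end sketch (database, table, and file names are made up for illustration; the CSV is assumed to have a header row):

        -- after: mysql -u root -p
        CREATE DATABASE personal;
        USE personal;

        CREATE TABLE expenses (
          id       INT AUTO_INCREMENT PRIMARY KEY,
          spent_on DATE,
          amount   DECIMAL(10,2),
          note     VARCHAR(255)
        );

        LOAD DATA LOCAL INFILE '/home/ari/expenses.csv'
        INTO TABLE expenses
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES
        (spent_on, amount, note);

    If you'd rather work through forms than SQL, MySQL Workbench offers an Access-like table editor on top of the same server.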


  • why does mysql have so many more open and fragmented tables than tables in the DB?

    - by kswift
    I've been working on making our database run a little smoother, with good results over the past week. But there are still some things I don't understand. For one thing, the database has 25 tables, but mysql status shows 512 open:

        $ mysqladmin status
        Uptime: 212854  Threads: 1  Questions: 43041  Slow queries: 7  Opens: 2605  Flush tables: 1  Open tables: 512  Queries per second avg: 0.202

    I've read that MyISAM opens extra file descriptors, and a few other reasons why the number of open tables might be higher than 25, but I am guessing that 512 is not a good thing. Any suggestions on why this might be, or what I should be looking into? I've also been using mysqltuner, and it has been helpful, but it has consistently listed the number of fragmented tables at 207. In phpMyAdmin I've selected all the tables and optimized them several times, and it hasn't reduced the number of fragmented tables that mysqltuner reports. I think I am missing some important concept about how this all works. Does anyone have any suggestions to point me in the right direction, or narrow down Google searches, or just generally help me be less clueless? Thanks!
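    Two hedged pointers that may explain both numbers: "Open tables" counts entries in MySQL's table cache rather than distinct tables (each concurrent use of a MyISAM table gets its own handler, and entries stay cached until evicted), so it naturally climbs toward the table_open_cache/table_cache limit. And mysqltuner's fragmentation count is often inflated on InnoDB tables stored in the shared tablespace, where OPTIMIZE cannot reclaim space. Things to look at ('yourdb' is a placeholder):

        -- the cache limit and its usage (the variable is table_cache before MySQL 5.1)
        SHOW VARIABLES LIKE 'table%cache%';
        SHOW GLOBAL STATUS LIKE 'Open%tables';
        -- a rapidly rising Opened_tables suggests the cache is too small;
        -- a high but stable Open_tables is generally harmless

        -- which engines are in play (OPTIMIZE behaves differently per engine)
        SELECT engine, COUNT(*) FROM information_schema.tables
        WHERE table_schema = 'yourdb' GROUP BY engine;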


  • How do you use VIM to edit tabular data (tables)? Specifically, BIND (named) DNS db files.

    - by Richard Bronosky
    I'm usually a purist when it comes to vimming. I don't like remapping keys or learning to rely on a bunch of plugins; I like to feel just as powerful on foreign boxen as I do on my own dev box. I do, however, believe in syntax files. Even though the solution may not be a syntax file (bindzone.vim is what I use), I want it badly enough to do whatever. I regularly view or edit tab-delimited (or comma-delimited, but that would be a bonus) data, and I hate having to set my tabstop to some ridiculous number in order to have everything line up. Example: the BIND zone file columns are roughly 40+, 6, 2, 5, and 15+ characters wide, so even though the data itself would fit on a single screen, with ts=40 it no longer does. I have been searching for a "dynamic tab size" solution for years, with no luck. I hate that my only good way of editing or even visualizing tabular data is to scp it to my workstation and open it in Open Office. There has to be a better way.
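    Two plugin-free ideas (hedged suggestions, not from the post):

        " View-only: shell out to align the columns (replaces tabs with spaces)
        :%!column -t

        " Vim 8.1+ built with +vartabs supports per-column tab stops,
        " which matches the ~40,6,2,5,15 zone-file layout directly
        :setlocal vartabstop=42,7,3,6,16

    The first is great for reading but must not be saved; the second keeps the real tabs in the file and only changes how they display.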


  • Wordpress redirects to itself endlessly

    - by iTayb
    I've just upgraded to the latest version (2.9.1) from a rather old version (2.2.1). After the upgrade, WordPress cannot be accessed from my .com domain, although it works via the other subdomain: db-he.110mb.com works fine while http://www.db-he.com doesn't. Both point to the same server, and the configuration is fine, yet you cannot load index.php (which is WordPress'). www.db-he.com/index.php does a permanent redirect to www.db-he.com/index.php itself, for some reason. The problem is with WordPress only; all other files work fine. For example, changes.txt can be accessed from both links:

        www.db-he.com/changes.txt
        db-he.110mb.com/changes.txt

    For some reason, it seems more like a server problem than a WordPress problem. What can I do?
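    A redirect from index.php to itself is what WordPress's canonical redirect produces when the stored siteurl/home options disagree with the hostname being requested (a hypothesis worth checking, since db-he.110mb.com works and www.db-he.com doesn't). A hedged way to pin the URLs without touching the database:

        // wp-config.php: force the canonical URLs
        define('WP_HOME', 'http://www.db-he.com');
        define('WP_SITEURL', 'http://www.db-he.com');

    The equivalent database fix is updating the 'siteurl' and 'home' rows in wp_options to the .com URL.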


  • How do I convert a Mac OS FileMaker 2 database to a recent FM or Bento db, preserving the relations?

    - by willc2
    I'm hoping for more than just exporting the data; I would like to preserve the relations between the databases. This is for a friend's legacy database that tracks monthly fees from a list of clients. I have the original FileMaker database file on hand, but not the machine it ran on with the old version of FileMaker 2; recent versions won't import it, saying it's too old. A Mac-only solution would be fine if it makes things simpler.


  • Is it possible for the Subversion Apache module to serve html files with an html content-type without using the svn:mime-type property?

    - by Martin Pain
    I am aware that if you set the svn:mime-type Subversion property on a .html file to text/html then when viewing the file in a browser through the Subversion module in Apache httpd it will be served with a Content-Type: text/html header, enabling the browser to render it as HTML rather than plain text. However, I am looking for a way to do this without using the svn:mime-type property. I'm aware that you can configure your svn client to automatically add the property - this is not what I want, as I do not want to ensure all users have these settings. I'm also aware that I could create a pre-commit hook that rejects the commit if the properties are not set, in order to force users to set the property - I might fall back to that, but I'm looking for something less intrusive. I'm also aware that I could use a post-commit hook to add the properties automatically on the server-side. I'd rather not do that (as users then have to update immediately after their commit, and it's not trivial to write) - I'm looking for a better alternative. Perhaps something with rewrite rules in the Apache server?
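    On the pre-commit fallback mentioned above: a minimal hook sketch that rejects .html files lacking svn:mime-type (illustrative only; it uses standard svnlook subcommands but doesn't handle paths containing spaces):

        #!/bin/sh
        REPOS="$1"
        TXN="$2"
        SVNLOOK=/usr/bin/svnlook

        bad=$(
          $SVNLOOK changed -t "$TXN" "$REPOS" |
          awk '$1 ~ /^[AU]/ { print $2 }' |
          grep '\.html$' |
          while read -r f; do
            # No svn:mime-type set on this file in the incoming transaction?
            if [ -z "$($SVNLOOK propget -t "$TXN" "$REPOS" svn:mime-type "$f" 2>/dev/null)" ]; then
              echo "$f"
            fi
          done
        )

        if [ -n "$bad" ]; then
          echo "svn:mime-type (e.g. text/html) must be set on: $bad" >&2
          exit 1
        fi
        exit 0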


  • How do I connect to my InterBase db on a Windows Server from Delphi XE?

    - by chris
    I have installed InterBase on a Windows Server 2008 machine, added a database through IBConsole, and I'm trying to set up a connection to it from Delphi XE. I've clicked on the Data Explorer tab and right-clicked "INTERBASE/Add New Connection". This is where I am having trouble setting it up correctly. Anyone know this stuff? I'm waaaay out of my comfort zone here, but I'm helping some people out and I'm trying to get this to work myself so that I can hopefully fix the problems they are having.
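    The Data Explorer's INTERBASE node is dbExpress underneath, so the same connection can be described in code. A hedged Delphi sketch (the host, database path, and the stock sysdba/masterkey credentials are placeholders):

        uses SqlExpr;

        var
          conn: TSQLConnection;
        begin
          conn := TSQLConnection.Create(nil);
          try
            conn.DriverName := 'INTERBASE';
            // host:absolute-path-of-the-database-file *on the server*
            conn.Params.Values['Database'] := 'myserver:C:\Data\mydb.ib';
            conn.Params.Values['User_Name'] := 'sysdba';   // InterBase default admin
            conn.Params.Values['Password'] := 'masterkey'; // default password
            conn.LoginPrompt := False;
            conn.Connected := True;
          finally
            conn.Free;
          end;
        end;

    The same four values (driver, database as host:path, user name, password) are what the Add New Connection dialog is asking for.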

