Search Results

Search found 57000 results on 2280 pages for 'alter database open'.

  • Edit Contact code worked in 1.6 but doesn't work on Droid 2.1?

    - by user225405
    Hi All, I had some fairly simple code in my app to invoke Edit Contact activity on a known good contact index that worked in Android 1.6 but is broken for me now in Android 2.1 on the Droid. I built a sample activity/app 'EdCon' to show this: package com.jbh; import android.app.Activity; import android.content.Intent; import android.net.Uri; import android.os.Bundle; public class EdCon extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); // Build an intent to edit a known good contact index Intent i; i = new Intent(Intent.ACTION_EDIT); i.setData(Uri.parse("content://contacts/people/10")); startActivity(i); } } When I run this on my G1 running 1.6 it works as expected i.e. brings up the Edit Contact screen for the known index and then I can hit BACK to return to "Hello World, EdCon". When I run this on the Droid under 2.1 I get the following: 05-07 15:35:57.787: INFO/ActivityManager(1013): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.jbh/.EdCon } 05-07 15:35:57.826: DEBUG/AndroidRuntime(13780): Shutting down VM 05-07 15:35:57.826: DEBUG/dalvikvm(13780): DestroyJavaVM waiting for non-daemon threads to exit 05-07 15:35:57.928: DEBUG/dalvikvm(13780): DestroyJavaVM shutting VM down 05-07 15:35:57.928: DEBUG/dalvikvm(13780): HeapWorker thread shutting down 05-07 15:35:57.928: DEBUG/dalvikvm(13780): HeapWorker thread has shut down 05-07 15:35:57.928: DEBUG/jdwp(13780): JDWP shutting down net... 05-07 15:35:57.928: DEBUG/jdwp(13780): Got wake-up signal, bailing out of select 05-07 15:35:57.928: INFO/dalvikvm(13780): Debugger has detached; object registry had 1 entries 05-07 15:35:57.928: DEBUG/dalvikvm(13780): VM cleaning up 05-07 15:35:57.935: INFO/ActivityManager(1013): Start proc com.jbh for activity com.jbh/.EdCon: pid=13802 uid=10052 gids={1015} 05-07 15:35:57.967: ERROR/AndroidRuntime(13780): ERROR: thread attach failed 05-07 15:35:58.053: INFO/ActivityThread(13792): Publishing provider com.android.vending.SuggestionsProvider: com.android.vending.SuggestionsProvider 05-07 15:35:58.154: INFO/dalvikvm(13802): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38) 05-07 15:35:58.209: DEBUG/dalvikvm(13780): LinearAlloc 0x0 used 639500 of 5242880 (12%) 05-07 15:35:58.365: INFO/dalvikvm(13802): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=18) 05-07 15:35:58.639: INFO/ActivityManager(1013): Starting activity: Intent { act=android.intent.action.EDIT dat=content://contacts/people/10 cmp=com.android.contacts/.ui.EditContactActivity } 05-07 15:35:58.975: DEBUG/dalvikvm(13137): GC freed 2902 objects / 166768 bytes in 61ms 05-07 15:35:59.100: DEBUG/vending(13792): com.android.vending.LocalDbSyncService.run(): Syncing local DB with package manager... 05-07 15:35:59.100: DEBUG/vending(13792): com.android.vending.LocalDbSyncService.syncLocalDbWithPackageManager(): No INSTALLING or UNINSTALLING assets. 05-07 15:35:59.115: INFO/ActivityManager(1013): Displayed activity com.android.contacts/.ui.EditContactActivity: 387 ms (total 1296 ms) 05-07 15:35:59.185: DEBUG/Sources(13137): Creating external source for type=com.facebook.auth.login, packageName=com.facebook.katana 05-07 15:35:59.225: DEBUG/vending(13792): com.android.vending.LocalDbSyncService.run(): Syncing done. 
05-07 15:35:59.232: WARN/dalvikvm(13137): threadid=27: thread exiting with uncaught exception (group=0x4001b180) 05-07 15:35:59.232: ERROR/AndroidRuntime(13137): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): java.lang.RuntimeException: An error occured while executing doInBackground() 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.os.AsyncTask$3.done(AsyncTask.java:200) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask.run(FutureTask.java:137) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.lang.Thread.run(Thread.java:1096) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): Caused by: android.database.sqlite.SQLiteException: no such column: raw_contact_id: , while compiling: SELECT account_name, account_type, sourceid, version, dirty, data_id, res_package, mimetype, data1, data2, data3, data4, data5, data6, data7, data8, data9, data10, data11, data12, data13, data14, data15, data_sync1, data_sync2, data_sync3, data_sync4, _id, is_primary, is_super_primary, data_version, group_sourceid, sync1, sync2, sync3, sync4, deleted, contact_id, starred, is_restricted FROM contact_entities_view WHERE (1) AND (raw_contact_id=10) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteProgram.native_compile(Native Method) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteProgram.compile(SQLiteProgram.java:110) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteProgram.(SQLiteProgram.java:59) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteQuery.(SQLiteQuery.java:49) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:49) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1221) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.database.sqlite.SQLiteQueryBuilder.query(SQLiteQueryBuilder.java:316) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.providers.contacts.ContactsProvider2.query(ContactsProvider2.java:3850) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.providers.contacts.ContactsProvider2.query(ContactsProvider2.java:3840) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.providers.contacts.ContactsProvider2$RawContactsEntityIterator.(ContactsProvider2.java:4498) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.providers.contacts.ContactsProvider2.queryEntities(ContactsProvider2.java:4751) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.content.ContentProvider$Transport.queryEntities(ContentProvider.java:140) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at 
android.content.ContentProviderClient.queryEntities(ContentProviderClient.java:98) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.content.ContentResolver.queryEntities(ContentResolver.java:296) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.contacts.model.EntitySet.fromQuery(EntitySet.java:72) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.contacts.ui.EditContactActivity$QueryEntitiesTask.doInBackground(EditContactActivity.java:191) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.contacts.ui.EditContactActivity$QueryEntitiesTask.doInBackground(EditContactActivity.java:154) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at com.android.contacts.util.WeakAsyncTask.doInBackground(WeakAsyncTask.java:45) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at android.os.AsyncTask$2.call(AsyncTask.java:185) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) 05-07 15:35:59.295: ERROR/AndroidRuntime(13137): ... 4 more 05-07 15:35:59.303: INFO/Process(1013): Sending signal. PID: 13137 SIG: 3 05-07 15:35:59.303: INFO/dalvikvm(13137): threadid=7: reacting to signal 3 05-07 15:35:59.303: ERROR/dalvikvm(13137): Unable to open stack trace file '/data/anr/traces.txt': Permission denied 05-07 15:35:59.506: INFO/DumpStateReceiver(1013): Added state dump to 1 crashes 05-07 15:36:07.053: DEBUG/dalvikvm(12901): GC freed 389 objects / 25056 bytes in 145ms 05-07 15:36:17.287: DEBUG/dalvikvm(11649): GC freed 154 objects / 6816 bytes in 136ms 05-07 15:36:22.365: DEBUG/dalvikvm(13574): GC freed 348 objects / 67848 bytes in 112ms 05-07 15:36:27.451: DEBUG/dalvikvm(11836): GC freed 267 objects / 17432 bytes in 65ms 05-07 15:36:32.553: DEBUG/dalvikvm(12757): GC freed 1888 objects / 92440 bytes in 67ms 05-07 15:36:38.803: INFO/power(1013): * set_screen_state 0 05-07 15:36:38.813: DEBUG/SurfaceFlinger(1013): About to give-up screen, flinger = 0x114c30 05-07 15:36:38.826: DEBUG/Sensors(1013): using accelerometer (name=accelerometer) 05-07 15:36:38.834: DEBUG/PhoneWindow(13137): couldn't save which view has focus because the focused view android.widget.ScrollView@44883558 has no id. 05-07 15:36:38.865: DEBUG/WifiService(1013): ACTION_SCREEN_OFF 05-07 15:36:38.889: DEBUG/WifiService(1013): setting ACTION_DEVICE_IDLE timer for 900000ms 05-07 15:36:44.107: DEBUG/dalvikvm(1013): GC freed 7351 objects / 521440 bytes in 130ms 05-07 15:36:49.373: DEBUG/dalvikvm(13553): GC freed 321 objects / 12056 bytes in 102ms The no such column: raw_contact_id: looks like the issue but I'm not sure how or why that would happen or what it means. Any help appreciated! [email protected]

    Read the article

  • plupload variable upload path?

    - by SoulieBaby
    Hi all, I'm using plupload to upload files to my server (http://www.plupload.com/index.php), however I wanted to know if there was any way of making the upload path variable. Basically I need to select the upload path folder first, then choose the files using plupload and then upload to the initially selected folder. I've tried a few different ways but I can't seem to pass along the variable folder path to the upload.php file. I'm using the flash version of plupload. If someone could help me out, that would be fantastic!! :) Here's my plupload jquery: jQuery.noConflict(); jQuery(document).ready(function() { jQuery("#flash_uploader").pluploadQueue({ // General settings runtimes: 'flash', url: '/assets/upload/upload.php', max_file_size: '10mb', chunk_size: '1mb', unique_names: false, // Resize images on clientside if we can resize: {width: 500, height: 350, quality: 100}, // Flash settings flash_swf_url: '/assets/upload/flash/plupload.flash.swf' }); }); And here's the upload.php file: <?php /** * upload.php * * Copyright 2009, Moxiecode Systems AB * Released under GPL License. * * License: http://www.plupload.com/license * Contributing: http://www.plupload.com/contributing */ // HTTP headers for no cache etc header('Content-type: text/plain; charset=UTF-8'); header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); header("Last-Modified: ".gmdate("D, d M Y H:i:s")." GMT"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); header("Pragma: no-cache"); // Settings $targetDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads"; //temp directory <- need these to be variable $finalDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads2"; //final directory <- need these to be variable $cleanupTargetDir = true; // Remove old files $maxFileAge = 60 * 60; // Temp file age in seconds // 5 minutes execution time @set_time_limit(5 * 60); // usleep(5000); // Get parameters $chunk = isset($_REQUEST["chunk"]) ? $_REQUEST["chunk"] : 0; $chunks = isset($_REQUEST["chunks"]) ? $_REQUEST["chunks"] : 0; $fileName = isset($_REQUEST["name"]) ? $_REQUEST["name"] : ''; // Clean the fileName for security reasons $fileName = preg_replace('/[^\w\._]+/', '', $fileName); // Create target dir if (!file_exists($targetDir)) @mkdir($targetDir); // Remove old temp files if (is_dir($targetDir) && ($dir = opendir($targetDir))) { while (($file = readdir($dir)) !== false) { $filePath = $targetDir . DIRECTORY_SEPARATOR . $file; // Remove temp files if they are older than the max age if (preg_match('/\\.tmp$/', $file) && (filemtime($filePath) < time() - $maxFileAge)) @unlink($filePath); } closedir($dir); } else die('{"jsonrpc" : "2.0", "error" : {"code": 100, "message": "Failed to open temp directory."}, "id" : "id"}'); // Look for the content type header if (isset($_SERVER["HTTP_CONTENT_TYPE"])) $contentType = $_SERVER["HTTP_CONTENT_TYPE"]; if (isset($_SERVER["CONTENT_TYPE"])) $contentType = $_SERVER["CONTENT_TYPE"]; if (strpos($contentType, "multipart") !== false) { if (isset($_FILES['file']['tmp_name']) && is_uploaded_file($_FILES['file']['tmp_name'])) { // Open temp file $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? 
"wb" : "ab"); if ($out) { // Read binary input stream and append it to temp file $in = fopen($_FILES['file']['tmp_name'], "rb"); if ($in) { while ($buff = fread($in, 4096)) fwrite($out, $buff); } else die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}'); fclose($out); unlink($_FILES['file']['tmp_name']); } else die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}'); } else die('{"jsonrpc" : "2.0", "error" : {"code": 103, "message": "Failed to move uploaded file."}, "id" : "id"}'); } else { // Open temp file $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab"); if ($out) { // Read binary input stream and append it to temp file $in = fopen("php://input", "rb"); if ($in) { while ($buff = fread($in, 4096)){ fwrite($out, $buff); } } else die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}'); fclose($out); } else die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}'); } //Moves the file from $targetDir to $finalDir after receiving the final chunk if($chunk == ($chunks-1)){ rename($targetDir . DIRECTORY_SEPARATOR . $fileName, $finalDir . DIRECTORY_SEPARATOR . $fileName); } // Return JSON-RPC response die('{"jsonrpc" : "2.0", "result" : null, "id" : "id"}'); ?>

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my Asp.net MVC project. When I started on this project I bought the Pro ASP.Net MVC, and with that book I learned about TDD and unit testing. After seeing the examples, and the fact that I work as a software engineer in QA in my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gun-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering: I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way as in my code was broken and I had to fix it so the unit test pass. I mean that abstracting out the database to a mock database is impossible due to the use of linq for data retrieval (using the generic repository pattern). The reason is that with linq-sql or linq-entities you can do joins just by doing: var objs = select p from _container.Projects select p.Objects; However, if you mock the database layer out, in order to have that linq pass the unit test you must change the linq to be var objs = select p from _container.Projects join o in _container.Objects on o.ProjectId equals p.Id select o; Not only does this mean you are changing your application logic just so you can unit test it, but you are making your code less efficient for the sole purpose of testability, and getting rid of a lot of advantages using an ORM has in the first place. Furthermore, since a lot of the IDs for my models are database generated, I proved to have to write additional code to handle the non-database tests since IDs were never generated and I had to still handle those cases for the unit tests to pass, yet they would never occur in real scenarios. Thus I ended up throwing out my database unit testing. Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated ajax web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this occurred my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial json. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far). So finally I was left with testing the Service layer (BLL). Right now I am using EF4, however I had this issue with linq-sql as well. I chose to do the EF4 model-first approach because to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate it into the sql backend). This was fine at the beginning but now it is becoming cumbersome due to relationships. For example say I have Project, User, and Object entities. One Object must be associated to a project, and a project must be associated to a user. This is not only a database specific rule, these are my business rules as well. However, say I want to do a unit test that I am able to save an object (for a simple example). 
I now have to do the following code just to make sure the save worked: User usr = new User { Name = "Me" }; _userService.SaveUser(usr); Project prj = new Project { Name = "Test Project", Owner = usr }; _projectService.SaveProject(prj); Object obj = new Object { Name = "Test Object" }; _objectService.SaveObject(obj); // Perform verifications There are many issues with having to do all this just to perform one unit test. There are several issues with this. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into EVERY single unit test that references a project, add code to save the category then add code to add the category to the project. This can be a HUGE effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I then have to go through every Object and Project unit test to make sure that the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method it will cause all project, and object unit tests to fail and it the cause won't be immediately noticeable without having to dig through the exceptions. Thus I have removed all service layer unit tests from my project. I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues can probably be mitigated but not without adding extra layers to my application, and thus making more points of failure just so I can unit test. Thus I have no unit tests left in my code. Luckily I heavily use source control so I can get them back if I need but I just don't see the point. Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/Unit tests give bad arguments claiming they are more efficient debugging by hand through the IDE, or that their coding skills are amazing that they don't need it. I recognize that both of those arguments are utter bullocks, especially for a project that needs to be maintainable by multiple developers, but any valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask, am I just not understanding how to use TDD and automatic unit tests?
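    (As a language-agnostic illustration of the isolation being discussed, here is roughly what a service-layer test looks like when the repository is replaced with a test double, written in Python's unittest purely for illustration; the class and method names are hypothetical and not taken from the C# code above. With a fake repository, a bug in the user-saving path cannot ripple into unrelated project tests.)

        import unittest
        from unittest import mock


        class ProjectService(object):
            def __init__(self, repository):
                self.repository = repository

            def save_project(self, project):
                # Business rule from the post: names must have at least 5 characters.
                if len(project.get("Name", "")) < 5:
                    raise ValueError("project name must have at least 5 characters")
                return self.repository.add(project)


        class ProjectServiceTests(unittest.TestCase):
            def test_save_project_persists_valid_project(self):
                repo = mock.Mock()                      # stands in for the real data layer
                ProjectService(repo).save_project({"Name": "Test Project"})
                repo.add.assert_called_once_with({"Name": "Test Project"})

            def test_save_project_rejects_short_names(self):
                with self.assertRaises(ValueError):
                    ProjectService(mock.Mock()).save_project({"Name": "abc"})


        if __name__ == "__main__":
            unittest.main()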

    Read the article

  • Python - pickling fails for numpy.void objects

    - by I82Much
    >>> idmapfile = open("idmap", mode="w") >>> pickle.dump(idMap, idmapfile) >>> idmapfile.close() >>> idmapfile = open("idmap") >>> unpickled = pickle.load(idmapfile) >>> unpickled == idMap False idMap[1] {1537: (552, 1, 1537, 17.793827056884766, 3), 1540: (4220, 1, 1540, 19.31205940246582, 3), 1544: (592, 1, 1544, 18.129131317138672, 3), 1675: (529, 1, 1675, 18.347782135009766, 3), 1550: (4048, 1, 1550, 19.31205940246582, 3), 1424: (1528, 1, 1424, 19.744396209716797, 3), 1681: (1265, 1, 1681, 19.596025466918945, 3), 1560: (3457, 1, 1560, 20.530569076538086, 3), 1690: (477, 1, 1690, 17.395542144775391, 3), 1691: (554, 1, 1691, 13.446117401123047, 3), 1436: (3010, 1, 1436, 19.596025466918945, 3), 1434: (3183, 1, 1434, 19.744396209716797, 3), 1441: (3570, 1, 1441, 20.589576721191406, 3), 1435: (476, 1, 1435, 19.640911102294922, 3), 1444: (527, 1, 1444, 17.98480224609375, 3), 1478: (1897, 1, 1478, 19.596025466918945, 3), 1575: (614, 1, 1575, 19.371648788452148, 3), 1586: (2189, 1, 1586, 19.31205940246582, 3), 1716: (3470, 1, 1716, 19.158674240112305, 3), 1590: (2278, 1, 1590, 19.596025466918945, 3), 1463: (991, 1, 1463, 19.31205940246582, 3), 1594: (1890, 1, 1594, 19.596025466918945, 3), 1467: (1087, 1, 1467, 19.31205940246582, 3), 1596: (3759, 1, 1596, 19.744396209716797, 3), 1602: (3011, 1, 1602, 20.530569076538086, 3), 1547: (490, 1, 1547, 17.994071960449219, 3), 1605: (658, 1, 1605, 19.31205940246582, 3), 1606: (1794, 1, 1606, 16.964881896972656, 3), 1719: (1826, 1, 1719, 19.596025466918945, 3), 1617: (583, 1, 1617, 11.894925117492676, 3), 1492: (3441, 1, 1492, 20.500667572021484, 3), 1622: (3215, 1, 1622, 19.31205940246582, 3), 1628: (2761, 1, 1628, 19.744396209716797, 3), 1502: (1563, 1, 1502, 19.596025466918945, 3), 1632: (1108, 1, 1632, 15.457141876220703, 3), 1468: (3779, 1, 1468, 19.596025466918945, 3), 1642: (3970, 1, 1642, 19.744396209716797, 3), 1518: (612, 1, 1518, 18.570245742797852, 3), 1647: (854, 1, 1647, 16.964881896972656, 3), 1650: (2099, 1, 1650, 20.439058303833008, 3), 1651: (540, 1, 1651, 18.552841186523438, 3), 1653: (613, 1, 1653, 19.237197875976563, 3), 1532: (537, 1, 1532, 18.885730743408203, 3)} >>> unpickled[1] {1537: (64880, 1638, 56700, -1.0808743559293829e+18, 152), 1540: (64904, 1638, 0, 0.0, 0), 1544: (54472, 1490, 0, 0.0, 0), 1675: (6464, 1509, 0, 0.0, 0), 1550: (43592, 1510, 0, 0.0, 0), 1424: (43616, 1510, 0, 0.0, 0), 1681: (0, 0, 0, 0.0, 0), 1560: (400, 152, 400, 2.1299736657737219e-43, 0), 1690: (408, 152, 408, 2.7201111331839077e+26, 34), 1435: (424, 152, 61512, 1.0122952080313192e-39, 0), 1436: (400, 152, 400, 20.250289916992188, 3), 1434: (424, 152, 62080, 1.0122952080313192e-39, 0), 1441: (400, 152, 400, 12.250144958496094, 3), 1691: (424, 152, 42608, 15.813941955566406, 3), 1444: (400, 152, 400, 19.625289916992187, 3), 1606: (424, 152, 42432, 5.2947192852601414e-22, 41), 1575: (400, 152, 400, 6.2537390010262572e-36, 0), 1586: (424, 152, 42488, 1.0122601755697111e-39, 0), 1716: (400, 152, 400, 6.2537390010262572e-36, 0), 1590: (424, 152, 64144, 1.0126357235581501e-39, 0), 1463: (400, 152, 400, 6.2537390010262572e-36, 0), 1594: (424, 152, 32672, 17.002994537353516, 3), 1467: (400, 152, 400, 19.750289916992187, 3), 1596: (424, 152, 7176, 1.0124003054161436e-39, 0), 1602: (400, 152, 400, 18.500289916992188, 3), 1547: (424, 152, 7000, 1.0124003054161436e-39, 0), 1605: (400, 152, 400, 20.500289916992188, 3), 1478: (424, 152, 42256, -6.0222748507426518e+30, 222), 1719: (400, 152, 400, 6.2537390010262572e-36, 0), 1617: (424, 152, 16472, 
1.0124283313854301e-39, 0), 1492: (400, 152, 400, 6.2537390010262572e-36, 0), 1622: (424, 152, 35304, 1.0123190301052127e-39, 0), 1628: (400, 152, 400, 6.2537390010262572e-36, 0), 1502: (424, 152, 63152, 19.627988815307617, 3), 1632: (400, 152, 400, 19.375289916992188, 3), 1468: (424, 152, 38088, 1.0124213248931084e-39, 0), 1642: (400, 152, 400, 6.2537390010262572e-36, 0), 1518: (424, 152, 63896, 1.0127436235399031e-39, 0), 1647: (400, 152, 400, 6.2537390010262572e-36, 0), 1650: (424, 152, 53424, 16.752857208251953, 3), 1651: (400, 152, 400, 19.250289916992188, 3), 1653: (424, 152, 50624, 1.0126497365427934e-39, 0), 1532: (400, 152, 400, 6.2537390010262572e-36, 0)} The keys come out fine, the values are screwed up. I tried same thing loading file in binary mode; didn't fix the problem. Any idea what I'm doing wrong? Edit: Here's the code with binary. Note that the values are different in the unpickled object. >>> idmapfile = open("idmap", mode="wb") >>> pickle.dump(idMap, idmapfile) >>> idmapfile.close() >>> idmapfile = open("idmap", mode="rb") >>> unpickled = pickle.load(idmapfile) >>> unpickled==idMap False >>> unpickled[1] {1537: (12176, 2281, 56700, -1.0808743559293829e+18, 152), 1540: (0, 0, 15934, 2.7457842047810522e+26, 108), 1544: (400, 152, 400, 4.9518498821046956e+27, 53), 1675: (408, 152, 408, 2.7201111331839077e+26, 34), 1550: (456, 152, 456, -1.1349175514578289e+18, 152), 1424: (432, 152, 432, 4.5939047815653343e-40, 11), 1681: (408, 152, 408, 2.1299736657737219e-43, 0), 1560: (376, 152, 376, 2.1299736657737219e-43, 0), 1690: (376, 152, 376, 2.1299736657737219e-43, 0), 1435: (376, 152, 376, 2.1299736657737219e-43, 0), 1436: (376, 152, 376, 2.1299736657737219e-43, 0), 1434: (376, 152, 376, 2.1299736657737219e-43, 0), 1441: (376, 152, 376, 2.1299736657737219e-43, 0), 1691: (376, 152, 376, 2.1299736657737219e-43, 0), 1444: (376, 152, 376, 2.1299736657737219e-43, 0), 1606: (25784, 2281, 376, -3.2883343074537754e+26, 34), 1575: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1586: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1716: (24240, 2281, 376, -3.0093091599657311e-35, 26), 1590: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1463: (24240, 2281, 376, 2.1299736657737219e-43, 0), 1594: (24240, 2281, 376, -4123208450048.0, 196), 1467: (25784, 2281, 376, 2.1299736657737219e-43, 0), 1596: (25784, 2281, 376, 2.1299736657737219e-43, 0), 1602: (25784, 2281, 376, -5.9963281433905448e+26, 76), 1547: (25784, 2281, 376, -218106240.0, 139), 1605: (25784, 2281, 376, -3.7138649803377281e+27, 56), 1478: (376, 152, 376, 2.1299736657737219e-43, 0), 1719: (25784, 2281, 376, 2.1299736657737219e-43, 0), 1617: (25784, 2281, 376, -1.4411779941597184e+17, 237), 1492: (25784, 2281, 376, 2.8596493694487798e-30, 80), 1622: (25784, 2281, 376, 184686084096.0, 93), 1628: (1336, 152, 1336, 3.1691839245470052e+29, 179), 1502: (1272, 152, 1272, -5.2042207205116645e-17, 99), 1632: (1208, 152, 1208, 2.1299736657737219e-43, 0), 1468: (1144, 152, 1144, 2.1299736657737219e-43, 0), 1642: (1080, 152, 1080, 2.1299736657737219e-43, 0), 1518: (1016, 152, 1016, 4.0240902787680023e+35, 145), 1647: (952, 152, 952, -985172619034624.0, 237), 1650: (888, 152, 888, 12094787289088.0, 66), 1651: (824, 152, 824, 2.1299736657737219e-43, 0), 1653: (760, 152, 760, 0.00018310768064111471, 238), 1532: (696, 152, 696, 8.8978061885676389e+26, 125)} OK I've isolated the problem, but don't know why it's so. First, apparently what I'm pickling are not tuples (though they look like it), but instead numpy.void types. 
Here is a series to illustrate the problem. first = run0.detections[0] >>> first (1, 19, 1578, 82.637763977050781, 1) >>> type(first) <type 'numpy.void'> >>> firstTuple = tuple(first) >>> theFile = open("pickleTest", "w") >>> pickle.dump(first, theFile) >>> theTupleFile = open("pickleTupleTest", "w") >>> pickle.dump(firstTuple, theTupleFile) >>> theFile.close() >>> theTupleFile.close() >>> first (1, 19, 1578, 82.637763977050781, 1) >>> firstTuple (1, 19, 1578, 82.637764, 1) >>> theFile = open("pickleTest", "r") >>> theTupleFile = open("pickleTupleTest", "r") >>> unpickledTuple = pickle.load(theTupleFile) >>> unpickledVoid = pickle.load(theFile) >>> type(unpickledVoid) <type 'numpy.void'> >>> type(unpickledTuple) <type 'tuple'> >>> unpickledTuple (1, 19, 1578, 82.637764, 1) >>> unpickledTuple == firstTuple True >>> unpickledVoid == first False >>> unpickledVoid (7936, 1705, 56700, -1.0808743559293829e+18, 152) >>> first (1, 19, 1578, 82.637763977050781, 1)
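    (This pattern matches older numpy/pickle behavior where scalar numpy.void records do not round-trip reliably. A minimal sketch of the usual workaround, converting each record to a plain tuple before pickling, follows; the dtype below is made up to stand in for run0.detections, whose real field names are unknown.)

        import pickle
        import numpy as np

        # Made-up dtype standing in for the structured array behind run0.detections.
        dt = np.dtype([('a', 'i4'), ('b', 'i4'), ('c', 'i4'), ('x', 'f8'), ('n', 'i4')])
        detections = np.array([(1, 19, 1578, 82.637763977050781, 1)], dtype=dt)

        # Convert each numpy.void record to a plain tuple before building the dict to pickle.
        idMap = {1: {1537: tuple(detections[0])}}

        f = open("idmap", "wb")     # binary mode, plus the binary pickle protocol
        pickle.dump(idMap, f, 2)
        f.close()

        f = open("idmap", "rb")
        unpickled = pickle.load(f)
        f.close()

        print(unpickled == idMap)   # True once plain tuples are stored instead of numpy.void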

    Read the article

  • delete row from mysql through generated table with records

    - by HennySmafter
    I am using 2 pages. On one page it generates a table with the records and a delete button. After pressing delete it goes to the second page which should delete the record. But it doesn't. Below is the code that I am using. PS: The code is adapted from a tutorial I found through Google a while ago. delete_overzicht.php <?php // Load Joomla! configuration file require_once('../../../configuration.php'); // Create a JConfig object $config = new JConfig(); // Get the required codes from the configuration file $server = $config->host; $username = $config->user; $password = $config->password; $database = $config->db; // Connect to db $con = mysqli_connect($server,$username,$password,$database); if (!$con){ die('Could not connect: ' . mysqli_error($con)); } mysqli_select_db($con,$database); // Get results $result = mysqli_query($con,"SELECT * FROM cypg8_overzicht"); echo "<table border='1' id='example' class='tablesorter'><thead><tr><th>Formulier Id</th><th>Domeinnaam</th><th>Bedrijfsnaam</th><th>Datum</th><th>Periode</th><th>Subtotaal</th><th>Dealernaam</th><th>Verwijderen</th></tr></thead><tbody>"; while($row = mysqli_fetch_array($result)) { echo "<tr>"; echo "<td>" . $row['formuliernummer'] . "</td>"; echo "<td>" . $row['domeinnaam'] . "</td>"; echo "<td>" . $row['bedrijfsnaam'] . "</td>"; echo "<td>" . $row['datum'] . "</td>"; echo "<td>" . $row['periode'] . "</td>"; echo "<td> &euro; " . $row['subtotaal'] . "</td>"; echo "<td>" . $row['dealercontactpersoon'] . "</td>"; echo "<td><a href='delete.php?id=" . $row['id'] . "'>Verwijderen </a></td>"; echo "</tr>"; } echo "</tbody></table>"; mysqli_close($con); ?> delete.php <?php // Load Joomla! configuration file require_once('../../../configuration.php'); // Create a JConfig object $config = new JConfig(); // Get the required codes from the configuration file $server = $config->host; $username = $config->user; $password = $config->password; $database = $config->db; // Connect to db $con = mysqli_connect($server,$username,$password,$database); if (!$con){ die('Could not connect: ' . mysqli_error($con)); } mysqli_select_db($con,$database); // Check whether the value for id is transmitted if (isset($_GET['id'])) { // Put the value in a separate variable $id = $_GET['id']; // Query the database for the details of the chosen id $result = mysqli_query($con,"DELETE * FROM cypg8_overzicht WHERE id = $id"); } else { die("No valid id specified!"); } ?> Thanks to everyone who is willing to help!
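    (For what it's worth, MySQL's DELETE statement takes no column list: the form is "DELETE FROM table WHERE ...", not "DELETE * FROM ...", and the id is safer bound as a parameter than interpolated from $_GET. Below is a rough sketch of that corrected statement, written in Python with MySQLdb purely as an illustration; the connection details are placeholders and the same SQL applies in the PHP above.)

        import MySQLdb

        # Placeholder connection details; in the PHP above these come from JConfig.
        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="joomla")
        cursor = conn.cursor()

        row_id = 42  # in the real page this value comes from the "id" request parameter

        # No column list after DELETE, and the id is bound as a parameter, not concatenated.
        cursor.execute("DELETE FROM cypg8_overzicht WHERE id = %s", (row_id,))
        conn.commit()

        cursor.close()
        conn.close()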

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World
    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.
    Using OSCH External Tables for Access and Loading
    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:
    SELECT * FROM my_hdfs_external_table;
    or to use the same SQL access to load a table in Oracle:
    INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:
    ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
    ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
    INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;
    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:
    /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */
    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
    Determine Your DOP
    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer typically equal to the number of CPUs per Oracle instance, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations where you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.
    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.
    Determining the Number of Location Files
    Let's assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember that location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file).
    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.
    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.
    Rule 3: The number of location files chosen should be a small multiple of the DOP.
    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files
    Let's start with the next rule and then explain it:
    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be relatively the same size.
    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.
    Next Steps
    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
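    (As an aside, the greedy balancing described above, grabbing the biggest remaining HDFS file, handing it to a location file, then taking the next biggest, and so on, is easy to sketch. The snippet below is only an illustration of that balancing strategy, not OSCH's actual implementation; the file names and sizes are made up to mirror the running example.)

        def balance(hdfs_files, num_location_files):
            # hdfs_files is a list of (name, size_in_bytes) pairs.
            location_files = [{"files": [], "total": 0} for _ in range(num_location_files)]
            # Biggest file first, always assigned to the currently lightest location file.
            for name, size in sorted(hdfs_files, key=lambda f: f[1], reverse=True):
                target = min(location_files, key=lambda loc: loc["total"])
                target["files"].append(name)
                target["total"] += size
            return location_files

        GB = 1024 ** 3
        files = [("part-%05d" % i, 10 * GB) for i in range(160)]  # 160 files of ~10GB
        for i, loc in enumerate(balance(files, 8)):               # DOP of 8 -> 8 location files
            print("location file %d: %d files, %d GB" % (i, len(loc["files"]), loc["total"] // GB))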

    Read the article

  • Cannot connect to Amazon RDS

    - by Justin
    I have created an Amazon RDS database under the free tier (SQL Server Express, micro instance, etc.), but I cannot connect to the server using Microsoft SQL Server Management Studio. I have configured the security group of the database instance (default) to accept my IP address, and I am following Amazon's connection guide. The error I receive is:
    Cannot connect to databaseName.c***rnqg***v.us-east-1.rds.amazonaws.com,1433. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.) (Microsoft SQL Server, Error: 10060)
    I am using Server type "Database Engine" and SQL Server Authentication.
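    (One low-level check that can narrow this down: error 10060 is a plain TCP timeout, so it is worth confirming that port 1433 on the endpoint is reachable at all from the client machine before digging into SQL Server settings. A quick sketch; the endpoint below is a placeholder, not the real one.)

        import socket

        endpoint = "databaseName.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"  # placeholder
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(10)
        try:
            sock.connect((endpoint, 1433))
            print("TCP connection to port 1433 succeeded")
        except (socket.timeout, socket.error) as exc:
            # A timeout here points at the security group or a local firewall, not authentication.
            print("TCP connection failed: %s" % exc)
        finally:
            sock.close()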

    Read the article

  • filestream restore very slow to DR server

    - by Jim
    We are backing up a database containing 100GB of filestream data. The backup takes under two hours to write. Restoring the same database to the DR environment is taking over forty hours. We don't have the same problem with other (non-filestream) databases, including some that are much larger. How do we get to the bottom of this problem? The database is in full recovery mode, and we are doing a full backup. Thanks.

    Read the article

  • How to resolve 0xc000007b error opening Adobe After Effects?

    - by user225170
    I have installed Adobe CS6 on Windows 8 and get the error "0xc000007b" every time I open Adobe After Effects. All other Adobe software, including Photoshop and Premiere Pro, works perfectly. I have looked online extensively but have not found any solutions. What can I try/do to resolve this error? OS: Windows 8 (64 bit) - CPU: Intel Core i3-3110M 2.40GHz - RAM: 6GB DDR3 - FX: AMD Radeon HD 7670M. When I open Adobe After Effects CS6 with Dependency Walker, this message appears: "Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module. Error: Modules with different CPU types were found. Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module." What can I do to resolve this error?

    Read the article

  • How to copy path on a mac?

    - by AngryHacker
    Windows refugee here. On Windows you can easily copy the path and paste it elsewhere to get to the directory. Here is the situation on the Mac. I am in the Finder, 20 folders down, and I see the file I want. I go to my application and want to open it, so I pick Open Document from the File menu. However, it is exceedingly difficult and time-consuming to get to the place I want. Is there a way to copy the path in the Finder and paste it into the File Open dialog of my application?

    Read the article

  • Cognos 8.3 and Oracle11g

    - by hiddenkirby
    When I place ojdbc5.jar in the c8\webapps\p2pd\WEB-INF\lib directory, I receive the error below. And yes, the content store is on an Oracle 11g RAC database. I am running the Oracle 11g client. The Cognos version is 8.3 SP4. [Content Manager database connection] 1. [ ERROR ] The database connection failed. 2. [ ERROR ] Content Manager failed to start because it could not load driver "oracle.jdbc.driver.OracleDriver". Any help would be greatly appreciated.

    Read the article

  • Can't Repair Mysql Table

    - by Pedro
    Hi, I have one table that I simply can't repair. I already tried to remove the partitioning but still get this error:
    alter table promo_tool_view_44 REMOVE PARTITIONING;
    ERROR 1034 (HY000): Incorrect key file for table 'promo_tool_view_44'; try to repair it
    I already tried to repair the table but I get this reply:
    repair table promo_tool_view_1;
    +-----------------------------+--------+----------+-----------------------------+
    | Table                       | Op     | Msg_type | Msg_text                    |
    +-----------------------------+--------+----------+-----------------------------+
    | vad_stats.promo_tool_view_1 | repair | error    | Partition p1 returned error |
    | vad_stats.promo_tool_view_1 | repair | error    | Corrupt                     |
    +-----------------------------+--------+----------+-----------------------------+
    2 rows in set (0.21 sec)
    How can I solve this? Thanks, Pedro

    Read the article

  • DPM 2010 RC Mailbox Recovery Fails

    - by ITGuy24
    I am testing DPM 2010 with Exchange 2010. I am attempting to restore a single mailbox from a previous backup to a Recovery Database. I created and mounted the Recovery Database on the Exchange 2010 server and set the overwrite property. When I run the restore for the mailbox and point it to the recovery database, I get the following error: The recovery jobs for Exchange Mailbox Database MailboxDatabase01 that started at with the destination of EXCHANGE2010.domain.com, have completed. Most or all jobs failed to recover the requested data. (ID 3111) DPM encountered an error while performing an operation for E:\DatabaseFiles\MailboxDatabase01.edb on EXCHANGE2010.domain.com (ID 2033 Details: The process cannot access the file because it is being used by another process (0x80070020)) MailboxDatabase01 is one of our MDBs and not the RDB I set up for the recovery. I am confused why it is even trying to access this, as I have triple-checked that the recovery is pointed to the RDB. Any idea what I am doing wrong?

    Read the article

  • General purpose ticketing/tech support system [closed]

    - by crazybyte
    Possible Duplicate: What's your favorite ticketing system? I was wondering if somebody could recommend a very user-friendly or simple general-purpose ticketing/tech support system. I need something that is web based, preferably open-source/free software implemented using PHP, Ruby, Ruby on Rails, or Java (as the back end) with MySQL or PostgreSQL as the database engine. I need something that is not development-management or project-management oriented like Eventum or similar (random example); something to which the user can connect, open a tech support request, and be able to follow it until it is solved or dropped. I need it to be open source so that I can modify or extend it if there is a need. I tried a number of such systems and found that osTicket or eTicket is close to what I need, but the code is somewhat flaky and some of the features work badly or behave strangely. Any thoughts/advice on where to find something similar? Thanks!

    Read the article

  • Differences between FCKeditor and CKeditor?

    - by matt74tm
    Sorry, but I've not been able to locate anything (even on the official pages) informing me of the differences between the two versions. Even a query on their official forum has been unanswered for many months - http://cksource.com/forums/viewtopic.php?f=11&t=17986&start=0
    The license page talks a bit about the "internal" differences: http://ckeditor.com/license
    CKEditor Commercial License - CKSource Closed Distribution License - CDL ... This license offers a very flexible way to integrate CKEditor in your commercial application. These are the main advantages it offers over an Open Source license:
    * Modifications and enhancements doesn't need to be released under an Open Source license;
    * There is no need to distribute any Open Source license terms alongside with your product and no reference to it have to be done;
    * No references to CKEditor have to be done in any file distributed with your product;
    * The source code of CKEditor doesn't have to be distributed alongside with your product;
    * You can remove any file from CKEditor when integrating it with your product.

    Read the article

  • Can I shorten my directory commands in ubuntu?

    - by Spencer Cooley
    When working on a rails app I like to open all of my files through the command line, like so:
    cd my_app
    gedit app/views/user/show.html.erb
    Is there a way that I could shorten this so that I could just write something like gedit user_views/show.html.erb ? I would like the console to stay in the main directory; I just don't like having to type out app/controller/user_controler.rb every time I want to open the user controller. I know that I could just open the file with my mouse, but I feel like moving from keyboard to mouse breaks my focus a little bit. When I can just tap away at the keyboard it seems like I have a smoother workflow.

    Read the article

  • Vmware Player Network adaptors have no network or internet access in windows 7 enterprise

    - by daffers
    As per the title. My VMware Player installation has set up the two network adaptors VMnet1 and VMnet8, and they are picked up as unidentified networks with no network access (I need this to activate my Windows Server installation on it). The option to change the network location is not available (this might be because of network policy on the domain, despite having set this as configurable in the local security policy section). Is there any way I can change how these networks are detected or alter the configuration of VMware to get around this?

    Read the article

  • How to create tunnel to utilize for telnet connection.

    - by Z12
    The scenario is as follows: Machine A is located behind a client firewall. The machine runs telnetd. This is a Linux machine with Python 2.5.4 installed. I do not know the IP address of the router, and the firewall is not open for incoming connections; outgoing connections are open. Machine B (a Windows machine) is a server with a well-known IP address. I can install any programs I want on either machine. The idea is that I want Machine A to open a socket to Machine B. Then I want to hold that socket and use it to run a telnet session from Machine B to Machine A's telnetd server. Is there any freeware that does this? Thoughts? Thanks!
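    (Since outgoing connections from Machine A are allowed and Python is already installed, the usual approach is a reverse tunnel: Machine A dials out to Machine B and bridges that connection to its own telnetd. OpenSSH's remote forwarding, ssh -R, does exactly this if an SSH server is an option on Machine B. Below is a bare-bones sketch of the Machine A side only, assuming Machine B runs some matching relay listening on port 9000; the host name and port are made up.)

        import socket
        import select

        SERVER_B = ("machine-b.example.com", 9000)   # well-known server, reached outbound
        LOCAL_TELNETD = ("127.0.0.1", 23)            # telnetd running on Machine A

        outbound = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        outbound.connect(SERVER_B)                   # allowed: the outgoing firewall is open

        local = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        local.connect(LOCAL_TELNETD)

        # Shuffle bytes in both directions until either side closes.
        sockets = [outbound, local]
        peer = {outbound: local, local: outbound}
        finished = False
        while not finished:
            readable, _, _ = select.select(sockets, [], [])
            for s in readable:
                data = s.recv(4096)
                if not data:
                    finished = True
                    break
                peer[s].sendall(data)
        outbound.close()
        local.close()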

    Read the article

  • MySQL Federated Tables Escaped Table Names

    - by Gordon
    I am trying to use MySQL federated tables. The documentation at http://dev.mysql.com/doc/refman/5.0/en/federated-use.html says that a federated table should be created using the following format for the CONNECTION parameter:
    scheme://user_name[:password]@host_name[:port_num]/db_name/tbl_name
    E.g.:
    CONNECTION='mysql://username:password@hostname:port/database/tablename'
    CONNECTION='mysql://username@hostname/database/tablename'
    CONNECTION='mysql://username:password@hostname/database/tablename'
    The problem is that the table I am trying to connect to has non-standard characters in it and I cannot find the proper way to escape them in the connection string. For example, a table named `Table (one)`, which has a space and parentheses, requires backticks surrounding it inside any SQL code. Does anyone know the proper way to do this?

    Read the article

  • Binary Log Format in MySQL

    - by amritansu
    The reference manual for MySQL 5.6 states that "Some changes, however, still use the statement-based format. Examples include all DDL (data definition language) statements such as CREATE TABLE, ALTER TABLE, or DROP TABLE." Does this statement mean that even if we have the ROW format for binary logs, all DDL statements will be logged in the binary log as statement-based events? How does this affect replication? Kindly help me to understand this.

    Read the article

  • How do I remove a repository of yum

    - by sunil
    When I search for a package in yum (CentOS 6), it tries to search in a repo named 'c6-media' and it gives a bunch of errors as follows:
    file:///media/CentOS/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/CentOS/repodata/repomd.xml
    Trying other mirror.
    file:///media/cdrecorder/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/cdrecorder/repodata/repomd.xml
    Trying other mirror.
    file:///media/cdrom/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/cdrom/repodata/repomd.xml
    Trying other mirror.
    Error: Cannot retrieve repository metadata (repomd.xml) for repository: c6-media. Please verify its path and try again
    Obviously the error seems to say that yum is trying to search for the CD/DVD from which the OS was installed, which I no longer have. All I want to do now is delete this repository from yum. I went to the package manager graphical tool and removed it from the sources, but it seems yum and the graphical tool do not use the same config. That is just my guess.

    Read the article

  • Saving a modified InfoPath form to its form library

    - by Nathan Lykken
    Within our corporate SharePoint 2007 site, there is a particular form library that contains 10 separate files. Nine of these are Excel, Word, or PowerPoint files, and one is an InfoPath 2007 form that serves as a report. After noticing an error within this form, I saved it to my local directory and then modified it in InfoPath's design mode. What is the proper way to save the modified form back to its form library? Everything that I have tried results in nobody except myself having access to the modified form. I can open it without error, but when my coworkers try to open it on their machines, they receive this error: "The form cannot be opened because it requires the domain permission level and it currently has restricted permission. To fix this problem, open the form from the location it was published to."

    Read the article

  • Is there any online web application for daily activity logs [closed]

    - by user25
    I am looking for a daily calendar-like app, but one that has a text editor attached for every day. All I want is that when I open the page for a day, it opens a text editor for that day where I can enter some notes on what I have accomplished, then save them. The next day, when I open the application, I can go to a new page but still have all of the previous days' entries available to look back at what I have done. Something like Google Calendar, but with an associated text editor where I can paste notes, stuff, etc. on a daily basis as logs.

    Read the article
