Search Results

Search found 59118 results on 2365 pages for 'data persistence'.

Page 758/2365 | < Previous Page | 754 755 756 757 758 759 760 761 762 763 764 765  | Next Page >

  • CodePlex Daily Summary for Monday, October 14, 2013

    Popular Releases:

    - AD ACL Scanner 1.3.2: Minor bug fixed: PowerShell 4.0 will report: Select-Object: Parameter cannot be processed because the parameter name p is ambiguous.
    - Json.NET 5.0 Release 7: New feature - Added support for Immutable Collections. New feature - Added WriteData and ReadData settings to DataExtensionAttribute. New feature - Added reference and type name handling support to extension data. New feature - Added default value and required support to constructor deserialization. Change - Extension data is now written when serializing. Fix - Added missing casts to JToken. Fix - Fixed parsing large floating point numbers. Fix - Fixed not parsing some ISO date ...
    - Fast YouTube Downloader: YouTube Downloader 2.2.0.
    - VidCoder 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. Warning: the hardware acceleration features require specific hardware or file types to work correctly. QSV: need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...
    - ASP.net MVC Awesome - jQuery Ajax Helpers 3.5.2: version 3.5.2 - fix for setting single value to multivalue controls; datepicker min/max date offset fix; html encoding for keys fix; enable Column.ClientFormatFunc to be a function call that will return a function. version 3.5.1 - fixed html attributes rendering; fixed loading animation rendering; css improvements. version 3.5 - autosize for all popups (can be turned off by calling in js awe.autoSize = false); added Parent, Parameter extensions ...
    - Coevery - Free ASP.NET CRM: Coevery_WebApp: just for publishing to the Microsoft web app gallery.
    - Wsus Package Publisher Release v1.3.1310.12: Allow the Update Creation Wizard to be set in full screen mode. Fix a bug which prevented WPP from resetting the Remote Sus Client ID. Change the behavior of links in the Update Detail Viewer: left-click to open, right-click to copy to the clipboard.
    - TerrariViewer v7 [Terraria Inventory Editor]: This is a complete overhaul but has the same core style. I hope you enjoy it. This version is compatible with 1.2.0.3. Please send issues to my Twitter or https://github.com/TJChap2840
    - WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.6.maint: I think this covers all of the issues. New additions: fixed the thumbnail problem for backgrounds. General clean up and error checking. Need to get this put through the wringer; all feedback is welcome.
    - BIDS Helper 1.6.4: This BIDS Helper release brings the following new features and fixes. New features: a new Bus Matrix style report option when you run the Printer Friendly Dimension Usage report for an SSAS cube; the Biml engine is now fully in sync with the supported subset of Varigence Mist 3.4, including a large number of language enhancements, bugfixes, and project deployment support. Fixed issues: fixed Biml execution for project connections, fixing a bug with Tabular Translations Editor not a...
    - Free language translator and file converter: Free Language Translator 3.4: fix for new version lookup.
    - MoreTerra (Terraria World Viewer) 1.11.3: New features: new markers added for Plantera's Bulb, Heart Fruits and Gold Cache; markers now correctly display for the gems found in rock debris on the floor. Compatibility: fixed header changes found in Terraria 1.0.3.1.
    - Dynamics AX 2012 R2 Kitting: First Beta release of Kitting. Install by using XPO or Models.
    - C# Intellisense for Notepad++ Release v1.0.8.0: fixed document formatting artifacts. To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
    - CS-Script for Notepad++ (C# intellisense and code execution) Release v1.0.8.0: fixed document formatting artifacts. To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
    - Generic Unit of Work and Repositories Framework v2.0: Async methods for Repositories - Ivan (@ifarkas). OData Async - Ivan (@ifarkas). Glimpse MVC4 working with MVC5. Glimpse EF6. Northwind.Repository project (layer): best practices for extending the Repositories. Northwind.Services project (layer): best practices for implementing a business facade. Live demo: http://longle.azurewebsites.net/Spa/Product#/list Documentation: http://blog.longle.net/2013/10/09/upgrading-to-async-with-entity-framework-mvc-odata-asyncentitysetcontroller-kendo-ui-gli...
    - Media Companion MC3.581b: Fix in place for TVDB xml issue. New: Movie - General Preferences, allow saving of ignored 'The' or 'A' to end of movie title, stored in sorttitle field; Movie - new way of cropping posters. Fixed: Movie - rename of folders/filename, caught error message; Movie - fixed bug in Save Cropped image, only saving in Pre-Frodo format if Both model selected; Movie - fixed cropped image didn't take zoomed ratio into effect; Movie - separated folder renaming and file renaming functions durin...
    - (Party) DJ Player DJP.124.12: 124.12 (feature implementation completed): changed datatype of HistoryDateInfo from string to DateTime. New: HistoryDateInfoConverter for the listbox. Improved: HistoryDateInfoDeleter, HistoryDateInfoLoader, HistoryDateInfoAsynchronizer, HistoryItemLoader, HistoryItemsToHistoryDateInfoConverter.
    - SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution 1.2.0: Highlights: multi-store support; "Trusted Shops" plugins; highly improved SmartStore.biz Importer plugin; add custom HTML content to pages; performance optimization. New features: multi-store support (now multiple stores can be managed within a single application instance, e.g. for building different catalogs, brands, landing pages etc.); added 3 new Trusted Shops plugins: Seal, Buyer Protection, Store Reviews; added Display as HTML Widget to CMS Topics (store owner now can add arbitrary HT...
    - NuGet 2.7.1: Released October 07, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.7.1 Important note: after downloading the signed build of NuGet.exe, if you perform an update using the "nuget.exe update -self" command, it will revert back to the unsigned build.

    New Projects:

    - aigiongai: aigiongai
    - Artificial LIfe BEing - ALIBE: An autonomous Arduino robot based on a 4-wheel electric RC style vehicle equipped with various sensors and reactionary devices, powered by Arduino Mega.
    - BH Auto-injector: This is a BH hack injector developed for r/SlashDiablo. For more information visit: http://www.reddit.com/r/slashdiablo
    - BloxServer: A server for a game currently in Dev.
    - BugNet Premium: BugNet Premium is an extended version of the open source defect tracking tool BugNet.
    - CASP Editor: CASP Editor hosted at Simlogical.
    - Grace: Grace is a Dependency Injection Container as well as a couple of other services, like a Data Transformation Service, Validation Service, and Reflection Service.
    - Halogy v1.2 CE1.0: Halogy v1.2 Community Edition v1.0.
    - Hot Likes: Simple web application, like Facebook and Twitter.
    - Hyper-V Backup.NET: Test.
    - IndexedList: This library helps you to create lists with large amounts of data and do high speed searches on your data.
    - Kinect Skeleton Stream to BVH Converter: The program converts the skeleton data stream into BVH.
    - Mama Goose's Birthday & Anniversary Calendar: This is a simple calendar that allows entering birthdays and anniversaries and printing the calendar for the year or month.
    - MQL to FDK converter with migration toolkit: "MQL to FDK" provides an easy and convenient way to convert MQL advisers to C# and launch them on FDK.
    - Produzr: Produzr is a very simple CMS without a high learning curve.
    - PSWFWeb - PowerShell Workflow WebConsole: A PSWorkflow job manager.
    - sacm: na
    - Save Cleaner: Sims 3 save cleaner.
    - Simple AES file appender: Simple AES file appender.
    - Sims 3 Package Viewer: Sims 3 Projects.
    - xBot Framework: xBot Framework is a .NET framework for building a bot for use with Reddit.com.

    Read the article

  • Which programming language to choose? (for a specific problem/domain, details inside)

    - by Bijan
    I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high-frequency trading portfolios (dealing with 1-min or 3-min bars of data, not tick data). I plan on employing Amazon Web Services to take on the entire load of the application. I have four choices that I am considering as the language:

    a) Java
    b) C++
    c) C#
    d) Python

    Here are the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements:

    - Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration.)
    - Real-time production of a portfolio with 100,000 trading strategies
    - Taking in 1-min or 3-min data from every stock/futures market around the globe (approx. 100,000)
    - Portfolio optimization of portfolios with up to 100,000 strategies (a rather intensive algorithm)

    Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python: I've read some things on PyPy and Psyco that claim Python can be optimized with JIT compiling to run at near C-like speeds. That's pretty much the only reason it is on this list, besides the fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk.

    To sum up:

    - real-time production
    - weekly simulations of a large number of systems
    - weekly/monthly optimizations of portfolios
    - large numbers of connections to collect data from

    There is no dealing with millisecond or even second-based trades. The only consideration is whether Java can possibly deal with this kind of load when spread out over the necessary number of EC2 servers. Thank you guys so much for your wisdom.
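    On the simulation side, the load question is mostly one of parallelism rather than raw language speed: each weekly simulation is independent, so the batch fans out across cores on one box and across boxes on EC2. A minimal sketch of the per-machine fan-out in Java; TradingSystem and SimulationResult are hypothetical placeholders, not part of any real system:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class SimulationBatch {
            // Hypothetical placeholder for one trading system to simulate.
            interface TradingSystem {
                SimulationResult simulate(); // runs one weekly simulation
            }

            record SimulationResult(double pnl) {}

            static List<SimulationResult> runAll(List<TradingSystem> systems)
                    throws InterruptedException, ExecutionException {
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);
                try {
                    List<Future<SimulationResult>> futures = new ArrayList<>();
                    for (TradingSystem s : systems) {
                        Callable<SimulationResult> task = s::simulate; // independent unit of work
                        futures.add(pool.submit(task));
                    }
                    List<SimulationResult> results = new ArrayList<>();
                    for (Future<SimulationResult> f : futures) {
                        results.add(f.get()); // propagates any per-system failure
                    }
                    return results;
                } finally {
                    pool.shutdown();
                }
            }
        }

    The cross-machine version is the same idea with a work queue in front of many such workers; the JVM itself is rarely the bottleneck for this shape of batch load.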

    Read the article

  • [Oracle Identity Manager] 11g R2 Bundle Patch 09 is Available!

    - by mustafakaya
    Oracle Identity Manager Bundle Patch 09 is available now. You can download BP09 from here. Also, there is an important recommendation for BP08!

    List of bugs fixed with BP09:

    - Bug 12699224: Trusted source reconciliation fails to create users with many reconciliation field mappings.
    - Bug 14407437: Provisioning through bulk request inserts null records into child tables.
    - Bug 14493217: Target resource reconciliation throws an ORA-06512 error when the Descriptive field is mapped to a field that does not have a reconciliation field mapping.
    - Bug 16044671: User form customization fails if a UDF contains an invalid character.
    - Bug 16545968: Modifying any attribute on a service account changes the account type to a primary account.
    - Bug 16562633: Oracle Identity Manager throws javax.el.ELException errors while viewing a profile under direct reports.
    - Bug 16662834: User not reprovisioned after the user is deleted and created in the target with the same orclguid.
    - Bug 16662905: If an LOV field is required on an Application Instance form, no validation is enforced on the LOV field although it is required.
    - Bug 16701873: The Members tab of a role displays only enabled users and does not display disabled users.
    - Bug 16862846: When a notification is being sent, the mail ID in the Reply To field is set to the recipient's mail ID instead of the sender's mail ID.
    - Bug 16824062: When you use the API to fetch or delete child data from an account, the child data row value is null. Therefore, child data is not returned.
    - Bug 16912736: There is a performance issue when the provisioned application instance details are opened for a user.

    Read the article

  • SQLite bulk insert on iPhone not working

    - by App_beginner
    Hi. I have been struggling with this seemingly easy problem for 48 hours, and I am no closer to a solution. So I was hoping that someone might be able to help me.

    I am building an app that uses a combination of a local (SQLite) database and an online database (PHP/MySQL). The app is nearly finished: checked for leaks, and it works like a charm. However, the very last part is the part I have struggled with. On launch, I want the app to check for changes to the online database, and if there are any, I want it to download and parse an XML file containing the changes. Everything is working fine this far. But when I try to bulk insert my parsed data into my database, the app crashes, giving an NSInternalInconsistency error, due to the database returning SQLITE_MISUSE. I have done a lot of googling, but am still unable to solve my problem. So I am putting the code here, hoping that someone can help me fix this. And I know that I should have used Core Data for this. But this is the very last part I am struggling with, and I am very reluctant to change my entire code now. Core Data will have to come in the update.

    Here is the error I receive:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Error while inserting data. 'library routine called out of sequence''

    Here is my code:

        -(void)UpdateDatabase:(const char *)_query NewValues:(NSMutableArray *)_odb dbn:(NSString *)_dbn dbp:(NSString *)_dbp
        {
            sqlite3 *database;   // note: this handle is never opened with sqlite3_open in this method
            NSMutableArray *NewValues = _odb;
            int i;
            const char *query = _query;
            sqlite3_stmt *addStmt;

            for (i = 1; i < [NewValues count]; i++) {
                if (sqlite3_prepare_v2(database, query, -1, &addStmt, NULL) == SQLITE_OK) {
                    sqlite3_bind_text(addStmt, 1, [[[NewValues objectAtIndex:i] name] UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(addStmt, 2, [[[NewValues objectAtIndex:i] city] UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_double(addStmt, 3, [[[NewValues objectAtIndex:i] lat] doubleValue]);
                    sqlite3_bind_int(addStmt, 4, [[[NewValues objectAtIndex:i] long] doubleValue]);  // bind_int used on a double value
                    sqlite3_bind_int(addStmt, 5, [[[NewValues objectAtIndex:i] code] intValue]);
                }
                // sqlite3_step runs even if the prepare above failed
                if (SQLITE_DONE != sqlite3_step(addStmt)) {
                    NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(database));
                }
                // Reset the add statement.
                sqlite3_reset(addStmt);
            }
        }

    Read the article

  • Need advice on framework design: how to make extending easy

    - by beginner_
    I'm creating a framework/library for a rather specific use-case (data type). It uses diverse Spring components, including spring-data. The library has a set of entity classes properly set up, and corresponding service and DAO layers. The main work, or main benefit, of the framework lies in the DAO and service layers. Developers using the framework should be able to extend my entity classes to add additional fields they require. Therefore I made the DAO and service layers generic, so they can be used by such extended entity classes. I now face an issue in the IO part of the framework. It must be able to import the corresponding "special data type" into the database. In this part I need to create a new entity instance, and hence need the actual class used. My current solution is to configure in Spring a bean of the actual class used. The problem with this is that an application using the framework can only use one implementation of the entity (the original one from me, or exactly one subclass, but not two different classes of the same hierarchy). I'm looking for suggestions / designs for solving this issue. Any ideas?
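    One way around the single-bean limitation is to have the importer ask a caller-supplied factory for new instances instead of a fixed Spring bean, so each import context can bind its own subclass. A rough sketch; all names here are illustrative, not part of the actual framework:

        // Minimal sketch, assuming Spring-style wiring; names are placeholders.
        class RawRecord { /* parsed fields from the imported file */ }

        abstract class BaseEntity {
            abstract void populateFrom(RawRecord record); // hypothetical mapping hook
        }

        interface EntityFactory<T extends BaseEntity> {
            T newInstance(); // called by the importer for every record it reads
        }

        class Importer<T extends BaseEntity> {
            private final EntityFactory<T> factory;

            Importer(EntityFactory<T> factory) {
                this.factory = factory;
            }

            T importRecord(RawRecord record) {
                T entity = factory.newInstance(); // concrete subclass chosen by the caller
                entity.populateFrom(record);
                return entity;
            }
        }

    An application can then wire one importer per subclass, e.g. new Importer<>(CustomEntityA::new) alongside new Importer<>(CustomEntityB::new), so two classes of the same hierarchy coexist in one application instance.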

    Read the article

  • Problem while displaying a texture image on a view that works fine on the iPhone simulator but not on the device

    - by yunas
    Hello, I am trying to display an image on the iPhone by converting it into a texture and then displaying it on a UIView. Here is the code that loads an image from a UIImage object:

        - (void)loadImage:(UIImage *)image mipmap:(BOOL)mipmap texture:(uint32_t)texture
        {
            int width, height;
            CGImageRef cgImage;
            GLubyte *data;
            CGContextRef cgContext;
            CGColorSpaceRef colorSpace;
            GLenum err;

            if (image == nil) {
                NSLog(@"Failed to load");
                return;
            }

            cgImage = [image CGImage];
            width = CGImageGetWidth(cgImage);
            height = CGImageGetHeight(cgImage);
            colorSpace = CGColorSpaceCreateDeviceRGB();

            // malloc may be used instead of calloc if your CGImage has dimensions
            // equal to the dimensions of the CG bitmap context
            data = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
            cgContext = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                              colorSpace, kCGImageAlphaPremultipliedLast);
            if (cgContext != NULL) {
                // Set the blend mode to copy. We don't care about the previous contents.
                CGContextSetBlendMode(cgContext, kCGBlendModeCopy);
                CGContextDrawImage(cgContext, CGRectMake(0.0f, 0.0f, width, height), cgImage);

                glGenTextures(1, &(_textures[texture]));
                glBindTexture(GL_TEXTURE_2D, _textures[texture]);
                if (mipmap)
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
                else
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, data);
                if (mipmap)
                    glGenerateMipmapOES(GL_TEXTURE_2D);

                err = glGetError();
                if (err != GL_NO_ERROR)
                    NSLog(@"Error uploading texture. glError: 0x%04X", err);

                CGContextRelease(cgContext);
            }
            free(data);
            CGColorSpaceRelease(colorSpace);
        }

    The problem I am currently facing: this code works perfectly and displays the image on the simulator, whereas on the device the debugger shows an error:

        Error uploading texture. glError: 0x0501

    Any idea how to tackle this bug? Thanks in advance for your solutions.
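    glError 0x0501 is GL_INVALID_VALUE. A common cause on older iPhone hardware is a non-power-of-two image: the simulator tolerates NPOT textures, while OpenGL ES 1.1 devices without the relevant extension reject them in glTexImage2D. One hedged fix, assuming that is the cause here, is to pad the bitmap up to power-of-two dimensions and scale the texture coordinates accordingly; a small helper for the rounding, written in Java for consistency with the other examples in this section:

        // Illustrative helper: round a texture dimension up to the next power of two.
        public final class TextureSizing {
            static int nextPowerOfTwo(int n) {
                int p = 1;
                while (p < n) {
                    p <<= 1; // 1, 2, 4, 8, ... until p >= n
                }
                return p;
            }

            public static void main(String[] args) {
                // A 320x480 image would need a 512x512 texture on power-of-two-only
                // hardware: draw the image into the top-left corner and scale the
                // texture coordinates by 320/512 and 480/512.
                System.out.println(nextPowerOfTwo(320)); // 512
                System.out.println(nextPowerOfTwo(480)); // 512
            }
        }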

    Read the article

  • ORM and component-based architecture

    - by EagleBeek
    I have joined an ongoing project, where the team calls their architecture "component-based". The lowest level is one big database. The data access (via ORM) and business layers are combined into various components, which are separated according to business logic. E.g., there's a component for handling bank accounts, one for generating invoices, etc. The higher levels of service contracts and presentation are irrelevant for the question, so I'll omit them here. From my point of view the separation of the data access layer into various components seems counterproductive, because it denies us the relational mapping capabilities of the ORM. E.g., when I want to query all invoices for one customer I have to identify the customer with the "customers" component and then make another call to the "invoices" component to get the invoices for this customer. My impression is that it would be better to leave the data access in one component and separate it from business logic, which may well be cut into various components. Does anybody have some advice? Have I overlooked something?
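    For contrast with the two-call pattern above, a single data-access layer can express the customer-to-invoices traversal as one mapped query. A minimal JPA-style sketch; the entity and field names are illustrative, not the project's actual model:

        import jakarta.persistence.*;
        import java.util.List;

        @Entity
        class Customer {
            @Id Long id;
            @OneToMany(mappedBy = "customer")
            List<Invoice> invoices;  // the ORM resolves the join for us
        }

        @Entity
        class Invoice {
            @Id Long id;
            @ManyToOne Customer customer;
        }

        class InvoiceQueries {
            // One round trip instead of "look up the customer, then ask another
            // component for its invoices":
            static List<Invoice> invoicesFor(EntityManager em, long customerId) {
                return em.createQuery(
                        "select i from Invoice i where i.customer.id = :id", Invoice.class)
                        .setParameter("id", customerId)
                        .getResultList();
            }
        }

    Splitting business logic into components while keeping one shared persistence layer preserves exactly this kind of relational mapping.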

    Read the article

  • Which Graphics/Geometry abstraction to choose?

    - by Robz
    I've been thinking about the design for a browser app on the HTML5 canvas that simulates a 2D robot zooming around, sensing the world around it. I decided to do this from scratch just for fun. I need shapes, like polygons, circles, and lines in order to model the robot and the world it lives in. These shapes need to be drawn with different appearance attributes, like border/fill style/width/color. I also need to have geometry functions to detect intersections and containment for the robot's sensors and so that the robot doesn't go inside stuff. One idea for functions is to have two totally separate libraries, one to implement graphics (like drawShape(context, shape)) and one for geometry operations (like shapeIntersectsShape(shape1, shape2)). Or, in a more object-oriented approach, the shape objects themselves could implement methods to do their own graphics (shape.draw(context)) and geometry operations (shape1.intersects(shape2)). Then there is the data itself: whether the data to draw a shape and the data to do geometric operations on that shape should be encapsulated within the same object, or be separate structures (where one would contain the other, or both be contained inside another structure). How do existing applications that do graphics/geometry stuff deal with this? Is there one model that is best, or is each good for certain applications? Should the fact that I'm using Javascript instead of a more classical language change how I approach the design?
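    One common middle ground: shapes own their geometry (the data and the intersection math live together), while drawing lives in a separate renderer that takes appearance attributes. A rough sketch of that split, written in Java here for consistency with the other examples in this section, though the same shape applies directly to JavaScript:

        // Geometry lives with the shape; rendering is a separate concern.
        interface Shape {
            boolean intersects(Shape other);   // pure geometry, no drawing state
        }

        record Circle(double x, double y, double r) implements Shape {
            @Override
            public boolean intersects(Shape other) {
                if (other instanceof Circle c) {
                    double dx = x - c.x(), dy = y - c.y();
                    return dx * dx + dy * dy <= (r + c.r()) * (r + c.r());
                }
                throw new UnsupportedOperationException("pair not covered in this sketch");
            }
        }

        // Appearance attributes belong to the renderer, not the shape.
        record Style(String strokeColor, String fillColor, double lineWidth) {}

        interface Renderer {
            void draw(Shape shape, Style style);  // e.g. backed by a canvas context
        }

    This keeps the robot's sensor code dependent only on Shape, so the geometry can be tested without any canvas at all.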

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration came up with a new way of monitoring, where the user can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to automatically refresh at set intervals.

    The dashboard shows the following information:

    - Status: The current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    - Messages Sent: The number of messages sent by the endpoint in the specified time period.
    - Messages Received: The number of messages received by the endpoint in the specified time period.
    - Errors: The number of messages with errors for the endpoint in the given time period.
    - Last Sent: The date and time the last message was sent from the endpoint.
    - Last Received: The date and time the last message was received by the endpoint.
    - Last Error: The date and time of the last error for the endpoint.

    It also shows a detailed view of a specific endpoint:

    - The document type.
    - The number of messages received per second.
    - The total number of messages processed in the specified time period.
    - The average size of each message.

    Read the article

  • Would You Swim Laps in Lake Baikal?

    - by rickramsey
    This is the lake where Yuli Vasiliev's countrymen swim laps. Yuli is one of my favorite OTN writers, not just because he really knows his stuff. Not just because his writing is clear and accurate. And not just because his English is better than the English of most native speakers. Yo, those are all good reasons. But it's the Lake Baikal thing.

    Yuli recently wrote two wicked good how-to's about Oracle VM Templates. You should read them. You might gain a gram of Yuli's respect. Two grams, if you can head butt icebergs while you swim.

    How to Use Oracle VM Templates
    How to prepare an Oracle VM environment to use Oracle VM Templates, how to obtain a template, and how to deploy the template to your Oracle VM environment. Also how to create a virtual machine based on that template, and how you can clone the template and change the clone's configuration.

    How to Use Oracle VM VirtualBox Templates
    How to use Oracle VM VirtualBox Templates in Oracle VM VirtualBox. Similar to the article above, but it describes how to download, install, and configure the templates within Oracle VM VirtualBox, instead of on bare metal.

    Other OTN Technical Articles by Yuli Vasiliev:

    - Retrieving, Transforming, and Consolidating Web Data with Oracle Database
    - Setting Up, Configuring, and Using a WebLogic Server Cluster
    - Cube Development for Beginners
    - How to XQuery Non-JDBC Sources from JDBC
    - Advanced Dimensional Design with Oracle Warehouse Builder
    - Using the JDBC Connectivity Layer in Oracle Warehouse Builder
    - High Performance Oracle JDBC Programming
    - Python Data Persistence with Oracle
    - Querying JPA Entities with JPQL and Native SQL

    - Rick

    Read the article

  • Should components have sub-components in a component-based system like Artemis?

    - by Daniel Ingraham
    I am designing a game using Artemis, although this is more of a philosophical question about component-based design in general.

    Let's say I have non-primitive data which applies to a given component (a component "animal" may have qualities such as "teeth" or "diet"). There are three ways to approach this in data-driven design, as I see it:

    1) Generate classes for these qualities using "traditional" OOP. I imagine this has negative implications for performance, as systems then must be made aware of these qualities in order to process them. It also seems counter to the overall philosophy of data-driven design.

    2) Include these qualities as sub-components. This seems off, in that we are now confusing the role of components with that of entities. Moreover, out of the box Artemis isn't capable of mapping these sub-components onto their parent components.

    3) Add "teeth", "diet", etc. as components to the overall entity alongside "animal". While this feels odd hierarchically, it may simply be a peculiarity of component-based systems.

    I suspect 3 is the correct way to think about things, but I was curious about other ideas.
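    For what it's worth, option 3 is the usual idiom in entity-component systems: components stay flat bags of data, and an entity simply carries several of them. A minimal sketch of the idea in plain Java; the Component marker and Entity map below are illustrative stand-ins, not Artemis's actual API:

        import java.util.HashMap;
        import java.util.Map;

        interface Component {}                        // marker, standing in for the framework's base type

        class Animal implements Component {}          // tag-like component
        class Teeth  implements Component { int count; }
        class Diet   implements Component { String kind; }

        class Entity {
            private final Map<Class<?>, Component> components = new HashMap<>();

            <T extends Component> Entity add(T c) {
                components.put(c.getClass(), c);
                return this;
            }

            <T extends Component> T get(Class<T> type) {
                return type.cast(components.get(type));
            }
        }

        // "An animal with teeth and a diet" is just an entity with three flat components:
        // new Entity().add(new Animal()).add(new Teeth()).add(new Diet());

    Systems then match on the component combinations they care about, which is why the flat arrangement, odd as it feels hierarchically, is what the processing model expects.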

    Read the article

  • "Oracle Coherence 3.5" Book - My Humble Review

    - by [email protected]
    After reviewing the book in more detail, I say again that it is a great guide for sure. Lots of important concepts that can sometimes be somewhat confusing are deeply reviewed, including all types of caching schemes and backing maps, and the cache topologies with their corresponding performance characteristics and very useful "When to use it?" sections. Some functionalities that are very desirable or heavily used are reviewed with examples and implementation best practices, including:

    - Data affinity
    - Querying
    - Pagination
    - Indexes
    - Aggregations
    - Event processing, listening and triggering
    - Data persistence
    - Security

    Regarding the networking and architecture topics, Coherence*Extend is exhaustively reviewed, including C++ and .NET clients, with very good tips and examples, even including source code. Personally, I am also glad to see that address providers (the <address-provider> tag), a new feature in Coherence 3.5 that provides a way to programmatically supply well-known addresses in order to connect to the cluster, are mentioned in the book, because they provide new functionality to satisfy special configuration requirements, for example:

    - Provide a way to switch extend nodes in cases of failure
    - Implement custom load balancing algorithms and/or dynamic discovery of TCP/IP connection acceptors
    - Dynamically assign TCP address and port settings when binding to a server socket

    Another very interesting and useful section is the "Coherent Bank Sample Application", which is a great tutorial, useful for understanding how Coherence interacts with third-party products, establishing a clear integration with them, including the use of non-Oracle products like MS Visual Studio.

    Read the article

  • Shutdown issue on persistent LiveUSB

    - by John K
    A) Downloaded Ubuntu to a Windows 7 temp directory from http://www.ubuntu.com/download/ubuntu/download (Ubuntu 11.10, latest version, 32-bit). The output was a .iso file.

    B) Created a bootable USB stick using Universal USB Installer: http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/. Did not select "Persistent file". Result: tried it in a Dell Latitude D630 laptop; works every time, many startups and shutdowns.

    C) Repeated B) but with "Persistent file" set. Works once or twice, but then just locks on the Ubuntu splash screen.

    E) Downloaded LiveUSB Install from http://live.learnfree.eu/download, which created a file on Windows 7 called live-usb-install-2.3.2.exe.

    F) Ran the above installer with "Persistent file" to a 4 GB SanDisk (formatted) thumb drive. Result: worked pretty well for a while. Shut down and rebooted several times. Changed items like creating new directories; all worked. Then:

    - Set up an admin account with a password and no auto login.
    - On the next reboot, it required the password and logged in correctly.
    - Tried to shut down via the top left icon, Shutdown menu option. Key issue: would not shut down, but would always go back to the login prompt.
    - Could successfully log in. Finally, shut down (which just put me back at the login window) and did a hard shutdown with the power switch.
    - Result: on reboot, just locks on the Ubuntu splash screen.

    Questions (note: very new to Linux):

    1. Did I shut down wrong (I mean prior to the hard power off)?
    2. Is the persistence option very unstable, or am I doing something else completely wrong?

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I recently was tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm?

    The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users, and information on the request being made. I would design this like so:

    User table:
        UserID (primary key, indexed, no dupes)
        FirstName
        LastName
        Department

    Request table:
        RequestID (primary key, indexed, no dupes)
        <...> various data fields containing request details
        UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

    UsersTable
        UserID  FirstName  LastName
        234     John       Doe
        516     Jane       Doe
        123     Foo        Bar

    DepartmentsTable
        DepartmentID  Name
        1             Sales
        2             HR
        3             IT

    UserDepartmentTable
        UserDepartmentID  UserID  Department
        1                 234     2
        2                 516     2
        3                 123     1

    RequestTable
        RequestID  UserID  <...>
        1          516     blah
        2          516     blah
        3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...

    Read the article

  • What are the requirements to test a website using jquery.get()? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get(), for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux), or do I have to have it uploaded to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working.

    Thanks, Frankie

    PS: Here's the JS/jQuery code for downloading the files to an array.

        g_lists = new Array();

        $(":checkbox").each(function(i){
            if ($(this).attr("name") != "0") {
                var path = "../" + $(this).attr("name") + ".txt";
                $("#bot").append("<br />" + path); // debug
                $.get(path, function(data){
                    g_lists[i] = data;
                    $("#bot").html(data);
                });
            } else {
                g_lists[i] = "";
            }
        });

    Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure. I'm new to web development. Here are some examples it produces, and the directory tree of the site. Maybe it will help; can't hurt.

        .
        +-- include
        |   +-- jquery.js
        |   +-- load.js
        +-- index.xhtml
        +-- style.css
        +-- txt
            +-- Scripting_Tools
                +-- Editors.txt
                +-- Other.txt

    Examples of path:

        ../txt/Scripting_Tools/Editors.txt
        ../txt/Scripting_Tools/Other.txt

    Well, I'm a new user, so I can't "answer" my own question, so I'll just post it here: After asking for help on an IRC chat channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then to run the site I navigated my browser to "localhost" and everything works.
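    The reason a server is needed at all: browsers block the XMLHttpRequest behind $.get() for file:// URLs, so the text files must come over HTTP; any static server works. As a sketch, the JDK's built-in static file server (Java 18 and later) can stand in for Apache; the port and site path below are placeholders:

        import com.sun.net.httpserver.SimpleFileServer;
        import java.net.InetSocketAddress;
        import java.nio.file.Path;

        public class ServeSite {
            public static void main(String[] args) {
                // Serve the site directory on http://localhost:8000 so $.get() works.
                // Requires the jdk.httpserver module, bundled with the JDK.
                var server = SimpleFileServer.createFileServer(
                        new InetSocketAddress(8000),
                        Path.of("site").toAbsolutePath(),   // placeholder: your site root
                        SimpleFileServer.OutputLevel.INFO);
                server.start();
            }
        }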

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40 GB) SQL Server database at the moment. Moving such a database between data centers over the Internet is not without its challenges. In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data. To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This returns the table's row count along with its reserved, data, index, and unused sizes.

    Of course, this only shows one table. If you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up. Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can! Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists Space Used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)

        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF(@@FETCH_STATUS<>0) BREAK;
            INSERT #TempTable EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor

        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO

    Read the article

  • HTTP handler for a classic ASP application for introducing a layer between client and server

    - by JPReddy
    I've a huge classic ASP application wherein thousands of users manage their company/business data. Currently this is not multi-user, so application users can create users and authorize them to access certain areas in the system. I'm thinking of writing a handler which will act as a middle man between client and server and go through every request, find out who the user is, and check whether he is authorized to access the data he is trying to reach. For the moment, ignore the part about how I'm going to check the authorization and all that stuff. I just want to know whether I can implement an ASP.NET handler and use it as a middle man for the requests coming to an ASP website. I just want to read the URL and see which page the user is trying to access, what parameters he is passing in the URL, and the posted data. Is this possible? I read that an ASP.NET handler cannot be used with an ASP website and that I need to use an ISAPI filter or extension for that, which can be developed only in C/C++. Can anybody throw some light on this and guide me whether I'm in the right direction or not?
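    Whatever the final vehicle (ISAPI filter, reverse proxy, or handler), the middle-man pattern itself looks the same: inspect the URL, query string, and identity, then either block or forward. A language-agnostic sketch of that shape, written as a Java servlet filter for consistency with the other examples in this section; the authorization check is a placeholder:

        import jakarta.servlet.*;
        import jakarta.servlet.http.HttpServletRequest;
        import jakarta.servlet.http.HttpServletResponse;
        import java.io.IOException;

        // Sketch of the "middle man": every request passes through here before
        // reaching the real pages (e.g. when fronting them with a proxy layer).
        public class AuthorizationFilter implements Filter {
            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;

                String page  = request.getRequestURI();   // which page is being hit
                String query = request.getQueryString();  // URL parameters
                String user  = request.getRemoteUser();   // however identity is carried

                if (!isAuthorized(user, page, query)) {   // placeholder check
                    ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                    return;
                }
                chain.doFilter(req, res);                 // forward to the real page
            }

            private boolean isAuthorized(String user, String page, String query) {
                return true; // stub: look up the user's permissions here
            }
        }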

    Read the article

  • How to re-add RAID-10 dropped drive?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have an Ubuntu server setup with RAID-10, and two of the drives dropped out of the array. When I try to re-add them using the following command:

        mdadm --manage --re-add /dev/md2 /dev/sdc1

    I get the following error message:

        mdadm: Cannot open /dev/sdc1: Device or resource busy

    When I do a "cat /proc/mdstat" I get the following:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$
        md2 : active raid10 sdb1[0] sdd1[3]
              1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U]

        md1 : active raid1 sda2[0] sdc2[1]
              468853696 blocks [2/2] [UU]

        md0 : active raid1 sda1[0] sdc1[1]
              19530688 blocks [2/2] [UU]

        unused devices: <none>

    When I run "/sbin/mdadm --detail /dev/md2" I get the following:

        /dev/md2:
                Version : 00.90
          Creation Time : Mon Sep  5 23:41:13 2011
             Raid Level : raid10
             Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
          Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
           Raid Devices : 4
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent

            Update Time : Thu Oct 25 09:25:08 2012
                  State : active, degraded
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0

                 Layout : near=2, far=1
             Chunk Size : 64K

                   UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb
                 Events : 0.6688691

            Number   Major   Minor   RaidDevice State
               0       8       17        0      active sync   /dev/sdb1
               1       0        0        1      removed
               2       0        0        2      removed
               3       8       49        3      active sync   /dev/sdd1

    Output of df -h is:

        Filesystem                     Size  Used Avail Use% Mounted on
        /dev/md1                       441G  2.0G  416G   1% /
        none                            32G  236K   32G   1% /dev
        tmpfs                           32G     0   32G   0% /dev/shm
        none                            32G  112K   32G   1% /var/run
        none                            32G     0   32G   0% /var/lock
        none                            32G     0   32G   0% /lib/init/rw
        tmpfs                           64G  215M   63G   1% /mnt/vmware
        none                           441G  2.0G  416G   1% /var/lib/ureadahead/debugfs
        /dev/mapper/RAID10VG-RAID10LV  1.8T  139G  1.6T   8% /mnt/RAID10

    When I do a "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of /dev/mapper; could that be the reason why the device is coming back as busy? Anyone have any suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!

    Read the article

  • Ensure Payroll Success with PeopleSoft Year-End Training for U.S. and Canada

    - by Breanne Cooley
    Year-end payroll processing and reporting is a requirement for your business. If you're responsible for completing these processes in either Canada or the United States using the PeopleSoft Payroll application, and if you're new to PeopleSoft Payroll or to performing these processes, consider enrolling in Oracle University's expert training. Our PeopleSoft Payroll specialists will guide you through the necessary steps to ensure you can smoothly and successfully perform your job. Training is specific to the country for which you are performing the processing and reporting. Training lasts one day and is delivered in our Live Virtual Class format, which helps you avoid travel during this busy season. Here's the training we recommend:

    PeopleSoft Year-End Payroll - U.S.
    This course teaches you how to complete U.S. year-end processing and reporting using PeopleSoft Payroll for North America, step by step:
    - Update tax reporting setup tables and update employees' income and tax records.
    - Load each employee's year-end data into a single year-end record for processing and reporting.
    - Identify reports needed to reconcile the year-end data.
    - Correct tax balances and other data as necessary.
    - Generate final print and online W-2 forms and prepare the electronic file for the Social Security Administration.
    - Enter corrected W-2 information and print a W-2c form.
    - Report periodic retirement distributions and related tax withholding amounts on form 1099-R.
    Please note: this course is intended for organizations using PeopleSoft release 8.81 or higher.

    PeopleSoft Year-End Payroll - Canada
    This course covers the steps necessary to perform Canadian year-end processing using Oracle's PeopleSoft Payroll for North America:
    - Explore adjustments, balances, year-end slip processing, common pitfalls and errors, and balancing reports.
    - Produce accurate year-end reporting results such as T4, T4A, RL-1 and RL-2.
    Please note: this course is intended for organizations using PeopleSoft release 8.81 or higher.

    See you in class!

    -Oracle University Marketing Team

    Read the article

  • What should a domain object's validation cover?

    - by MarcoR88
    I'm trying to figure out how to do validation of domain objects that need external resources, such as data mappers/DAOs. First, here's my code:

        class User {
            const INVALID_ID = 1;
            const INVALID_NAME = 2;
            const INVALID_EMAIL = 4;

            int getID();
            void setID(Int i);
            string getName();
            void setName(String s);
            string getEmail();
            void setEmail(String s);

            int getErrorsForInsert(); // returns a bitmask of INVALID_* constants
            int getErrorsForUpdate();
        }

    My worries are about the uniqueness of the email; checking it would require the storage layer. Reading others' code, it seems that two solutions are equally accepted: both perform the uniqueness validation in the data mapper, but some set an error state on the DO:

        user.addError(User.INVALID_EMAIL)

    while others prefer to throw a totally different type of exception that covers only persistence, like:

        UserStorageException {
            const INVALID_EMAIL = 1;
            const INVALID_CITY = 2;
        }

    What are the pros and cons of these solutions?
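    A third option keeps the domain object free of storage concerns while still catching the duplicate before insert: a validator service that takes the repository as a dependency and merges field errors with uniqueness errors. A hedged sketch in Java; the interfaces and checks are illustrative:

        import java.util.EnumSet;
        import java.util.Set;

        enum UserError { INVALID_ID, INVALID_NAME, INVALID_EMAIL, DUPLICATE_EMAIL }

        interface UserRepository {
            boolean emailExists(String email); // the only call that hits storage
        }

        class UserValidator {
            private final UserRepository repository;

            UserValidator(UserRepository repository) {
                this.repository = repository;
            }

            // Pure field checks could live on the domain object itself;
            // uniqueness necessarily needs the repository.
            Set<UserError> validateForInsert(String name, String email) {
                Set<UserError> errors = EnumSet.noneOf(UserError.class);
                if (name == null || name.isBlank()) errors.add(UserError.INVALID_NAME);
                if (email == null || !email.contains("@")) errors.add(UserError.INVALID_EMAIL);
                else if (repository.emailExists(email)) errors.add(UserError.DUPLICATE_EMAIL);
                return errors;
            }
        }

    The trade-off versus the two patterns in the question: callers get one error report for both kinds of failure, at the cost of an extra service class, and a race with concurrent inserts still has to be backed by a unique constraint in the database.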

    Read the article

  • Oracle as a Service in the Cloud

    - by Jason Williamson
    This should really be a Tweet, but I guess I'm writing a bit more. In the theme of data migration and legacy modernization, I am seeing more and more of a push to consolidate data sources, especially from non-Oracle to Oracle, in an effort to save dollars. From a modernization perspective, this fits in quite well. We are able to migrate things like Teradata, Sybase and DB2, put that all into an Oracle database, and then provide that as an OaaS (Oracle as a Service) cloud offering. This seems to be a growing trend, not unlike the cool RDS Amazon database cloud being built on Oracle as well. We again find ourselves back in the similar theme of migration, however. The target technology sounds like a winner (COBOL to Java/SOA ... DB2/Datacom/Adabas to Oracle), but the age-old problem of how to get there still persists. It is not trivial to migrate large amounts of pre-relational or "devolved" relational data. To do this, we again must revert back to a tight roadmap to migration and leverage the growing tools and services that we have. I'm working on a couple of new sections and chapters for a book that Tom and Prakash and I are writing around database migration and consolidation. I'll share some snippets shortly.

    Read the article

  • What to do when TDD tests reveal new functionality that is needed that also needs tests?

    - by Joshua Harris
    What do you do when you are writing a test and you get to the point where you need to make the test pass, and you realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says to make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass, I'm not supposed to go off and start another failing test to test the new functionality that I need to implement.

    For example, I am writing a Point class that has a function CollidesWithLine(LineSegment):

        public class Point
        {
            // Point data and constructor ...

            public bool CollidesWithLine(LineSegment lineSegment)
            {
                Vector PointEndOfMovement = new Vector(Position.X + Velocity.X,
                                                       Position.Y + Velocity.Y);
                LineSegment pointPath = new LineSegment(Position, PointEndOfMovement);

                if (lineSegment.Intersects(pointPath))
                    return true;

                return false;
            }
        }

    I was writing a test for CollidesWithLine when I realized that I would need a LineSegment.Intersects(LineSegment) function. But should I just stop what I am doing in my test cycle to go create this new functionality? That seems to break the "Red, Green, Refactor" principle. Should I just write the code that detects that line segments intersect inside of the CollidesWithLine function and refactor it after it is working? That would work in this case, since I can access the data from LineSegment, but what about in cases where that kind of data is private?
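    One common way out is to extract an interface for the missing dependency and hand the Point test a hand-rolled fake, so the outer red-green cycle can finish; LineSegment.Intersects then gets test-driven in its own cycle afterwards. A small illustrative sketch, in Java rather than C# for consistency with the other examples in this section:

        // Extracting an interface lets the collision test use a canned answer,
        // so the intersection math can be test-driven separately later.
        interface Segment {
            boolean intersects(Segment other);
        }

        class FakeSegment implements Segment {
            private final boolean answer;
            FakeSegment(boolean answer) { this.answer = answer; }
            @Override public boolean intersects(Segment other) { return answer; }
        }

        class PointCollisionTest {
            void collidesWhenPathCrossesSegment() {
                Segment path = new FakeSegment(true);  // no geometry implemented yet
                // assert that point.collidesWithLine(path) returns true ...
            }
        }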

    Read the article

  • Best design for a "Command Executer" class

    - by Justin984
    Sorry for the vague title, I couldn't think of a way to condense the question. I am building an application that will run as a background service and intermittently collect data about the system its running on. A second Android controller application will query the system over tcp/ip for statistics about the system. Currently, the background service has a tcp listener class that reads/writes bytes from a socket. When data is received, it raises an event to notify the service. The service takes the bytes, feeds them into a command parser to figure out what is being requested, and then passes the parsed command to a command executer class. When the service receives a "query statistics" command, it should return statistics over the tcp/ip connection. Currently, all of these classes are fully decoupled from each other. But in order for the command executer to return statistics, it will obviously need access to the socket somehow. For reasons I can't completely articulate, it feels wrong for the command executer to have a direct reference to the socket. I'm looking for strategies and/or design patterns I can use to return data over the socket while keeping the classes decoupled, if this is possible. Hopefully this makes sense, please let me know if I can include any info that would make the question easier to understand.
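    One decoupled shape that avoids handing the executer the socket: the executer returns a response value, and the service, which already owns both the listener and the executer, shuttles the bytes. A rough sketch with illustrative names:

        import java.io.IOException;
        import java.io.OutputStream;

        // The executer is pure: commands in, responses out. Only the service
        // layer, which already owns the TCP listener, touches the socket.
        interface Command {}
        record QueryStatistics() implements Command {}
        record Response(byte[] payload) {}

        interface CommandExecuter {
            Response execute(Command command);
        }

        class Service {
            private final CommandExecuter executer;

            Service(CommandExecuter executer) {
                this.executer = executer;
            }

            // Called from the listener's "data received" event, after parsing.
            void onCommandReceived(Command command, OutputStream socketOut) throws IOException {
                Response response = executer.execute(command);
                socketOut.write(response.payload()); // the executer never sees the socket
            }
        }

    The same idea works with a callback or future instead of a plain return value if some commands need to respond asynchronously; either way the executer stays testable without any network in sight.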

    Read the article

  • How to perform game object smoothing in multiplayer games

    - by spaceOwl
    We're developing an infrastructure to support multiplayer games for our game engine. In simple terms, each client (player) engine sends some pieces of data regarding the relevant game objects at a given time interval. On the receiving end, we step the incoming data to current time (to compensate for latency), followed by a smoothing step (which is the subject of this question).

    I was wondering how smoothing should be performed? Currently the algorithm is similar to this:

    1. Receive incoming state for an object (position, velocity, acceleration, rotation, custom data like visual properties, etc).
    2. Calculate a diff between the local object position and the position we have after previous prediction steps.
    3. If the diff doesn't exceed some threshold value, start a smoothing step:
       - Mark the object's CURRENT POSITION and the TARGET POSITION.
       - Linearly interpolate between these values for 0.3 seconds.

    I wonder if this scheme is any good, or if there is any other common implementation or algorithm that should be used? (For example, should I only smooth out the position, or other values too, such as speed?) Any help will be appreciated.
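    The interpolation step itself is small; a minimal sketch of the position-only version described above, in Java, made frame-rate independent by passing the frame delta in:

        // Smooths the rendered position toward the latency-corrected target
        // over a fixed window, as in the scheme described above.
        public class PositionSmoother {
            private double startX, startY;    // position when the update arrived
            private double targetX, targetY;  // position after stepping to current time
            private double elapsed;           // seconds since smoothing began
            private static final double DURATION = 0.3; // the 0.3 s window from the post

            void beginSmoothing(double curX, double curY, double tgtX, double tgtY) {
                startX = curX;  startY = curY;
                targetX = tgtX; targetY = tgtY;
                elapsed = 0.0;
            }

            // Call once per frame with the frame time; returns {x, y} to render.
            double[] update(double dt) {
                elapsed = Math.min(elapsed + dt, DURATION);
                double t = elapsed / DURATION;           // 0..1
                return new double[] {
                    startX + (targetX - startX) * t,     // plain lerp; an ease-in/out
                    startY + (targetY - startY) * t      // curve could replace this
                };
            }
        }

    Whether to smooth velocity as well depends on what consumes it: if velocity only feeds the next prediction, snapping it to the authoritative value and smoothing only the rendered position is a common compromise.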

    Read the article

  • java multipart POST library

    - by tom
    Is there a multipart POST library out there that achieves the same effect as doing a POST from an HTML form? For example: upload a file programmatically in Java, versus uploading the file using an HTML form. And on the server side, it just blindly expects the request from the client side to be a multipart POST request and parses out the data as appropriate. Has anyone tried this?

    Specifically, I am trying to see if I can simulate the following with Java:

    1. The user creates a blob by submitting an HTML form that includes one or more file input fields. Your app sets blobstoreService.createUploadUrl() as the destination (action) of this form, passing the function a URL path of a handler in your app.
    2. When the user submits the form, the user's browser uploads the specified files directly to the Blobstore.
    3. The Blobstore rewrites the user's request and stores the uploaded file data, replacing the uploaded file data with one or more corresponding blob keys, then passes the rewritten request to the handler at the URL path you provided to blobstoreService.createUploadUrl(). This handler can do additional processing based on the blob key.
    4. Finally, the handler must return a headers-only redirect response (301, 302, or 303), typically a browser redirect to another page indicating the status of the blob upload.

    Set blobstoreService.createUploadUrl as the form action, passing the application path to load when the POST of the form is completed:

        <body>
          <form action="<%= blobstoreService.createUploadUrl("/upload") %>"
                method="post" enctype="multipart/form-data">
            <input type="file" name="myFile">
            <input type="submit" value="Submit">
          </form>
        </body>

    Note that this is how the upload form would look if it were created as a JSP. The form must include a file upload field, and the form's enctype must be set to multipart/form-data. When the user submits the form, the POST is handled by the Blobstore API, which creates the blob. The API also creates an info record for the blob, stores the record in the datastore, and passes the rewritten request to your app on the given path as a blob key.
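    On the client side this is well-trodden ground: Apache HttpClient's multipart support (the httpmime module in the 4.x line) builds the same multipart/form-data body a browser form would send, so the server cannot tell the difference. A minimal sketch; the URL, field name, and file are placeholders:

        import java.io.File;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.entity.mime.MultipartEntityBuilder;
        import org.apache.http.impl.client.CloseableHttpClient;
        import org.apache.http.impl.client.HttpClients;

        public class MultipartUpload {
            public static void main(String[] args) throws Exception {
                try (CloseableHttpClient client = HttpClients.createDefault()) {
                    // In the Blobstore case this would be the URL returned by
                    // blobstoreService.createUploadUrl("/upload").
                    HttpPost post = new HttpPost("http://example.com/upload"); // placeholder

                    // Builds the same multipart/form-data body a browser form sends;
                    // "myFile" matches the file input's name attribute.
                    post.setEntity(MultipartEntityBuilder.create()
                            .addBinaryBody("myFile", new File("report.pdf"))
                            .addTextBody("comment", "uploaded from Java")
                            .build());

                    HttpResponse response = client.execute(post);
                    System.out.println(response.getStatusLine());
                }
            }
        }

    Note that for the Blobstore flow specifically, the upload URL is single-use, so the client must fetch a fresh one from your app before each POST.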

    Read the article
