Search Results

Search found 21336 results on 854 pages for 'db api'.

Page 577/854 | < Previous Page | 573 574 575 576 577 578 579 580 581 582 583 584  | Next Page >

  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of scrum. If you are not familiar with scrum, it's just an alternative approach to a more traditional waterfall model, where a series of features are worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF. We use Visual C# 2010 Express edition as an IDE.

    If we work on a sprint and add in a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such. We just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal: it took 3 or 4 days and the devs simply fixed up any bugs found in the regression phase. But now, as the app is getting larger and larger and incorporating more and more features, the regression phase is stretching out for weeks.

    I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried out in a commercial environment, or to research the possibility of any UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either; how complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance.


  • Which platform to choose: .NET or Java? [closed]

    - by SahilMahajanMj
    I know that there is a lot of healthy discussion about this topic, but my case is entirely different from those discussions. I have just finished my graduation. During my studies I worked with both C#/.NET and Java. Now I am working as an Android programmer in a company. Recently, I have heard a lot about the pros and cons of these platforms. What our seniors used to suggest is that ASP.NET is a lot better than JSP for website development, and that .NET provides lots of built-in tools and APIs for building powerful applications. Java's platform independence has always been an issue among them, while security features are stronger in Java and .NET has more powerful IDEs. Now I am a bit confused about which platform to choose. I found Java the better of the two, which is why I chose Android. Also, the discussion always ends with the claim that Java has no scope for jobs the way .NET does. I would like to hear suggestions on this topic, and it would be better if you consider the Indian IT market in the discussion.


  • SDL_DisplayFormat works, but not SDL_DisplayFormatAlpha

    - by Bounderby
    The following code is intended to display a green square on a black background. It executes, but the green square does not show up. However, if I change SDL_DisplayFormatAlpha to SDL_DisplayFormat the square is rendered correctly. So what don't I understand? It seems to me that I am creating *surface with an alpha mask and I am using SDL_MapRGBA to map my green color, so it would be consistent to use SDL_DisplayFormatAlpha as well. (I removed error-checking for clarity, but none of the SDL API calls fail in this example.)

        #include <SDL.h>

        int main(int argc, const char *argv[])
        {
            SDL_Init( SDL_INIT_EVERYTHING );
            SDL_Surface *screen = SDL_SetVideoMode( 640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF );
            SDL_Surface *temp = SDL_CreateRGBSurface( SDL_HWSURFACE, 100, 100, 32, 0, 0, 0,
                ( SDL_BYTEORDER == SDL_BIG_ENDIAN ? 0x000000ff : 0xff000000 ) );
            SDL_Surface *surface = SDL_DisplayFormatAlpha( temp );
            SDL_FreeSurface( temp );
            SDL_FillRect( surface, &surface->clip_rect, SDL_MapRGBA( screen->format, 0x00, 0xff, 0x00, 0xff ) );
            SDL_Rect r;
            r.x = 50;
            r.y = 50;
            SDL_BlitSurface( surface, NULL, screen, &r );
            SDL_Flip( screen );
            SDL_Delay( 1000 );
            SDL_Quit();
            return 0;
        }


  • How can I design my classes for a calendar based on database events?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At the moment I have two classes: a "Calendar" class and a "CalendarCell" class. Here are the two classes' properties and method declarations.

        class Calendar
        {
            private $month;
            private $monthName;
            private $year;
            private $calendarCellList = array();
            private $translator;

            public function __construct($month, $year, $translator) {}
            public function getCalendarCells() {}
            public function getMonth() {}
            public function getMonthName() {}
            public function getNextMonth() {}
            public function getNextYear() {}
            public function getPreviousMonth() {}
            public function getPreviousYear() {}
            public function getYear() {}
            private function calculateDaysPreviousMonth() {}
            private function calculateNumericDayOfTheFirstDayOfTheWeek() {}
            private function isCurrentDay(\DateTime $dateTime) {}
            private function isDifferentMonth(\DateTime $dateTime) {}
        }

        class CalendarCell
        {
            private $day;
            private $month;
            private $dayNameAbbreviation;
            private $numericDayOfTheWeek;
            private $isCurrentDay;
            private $isDifferentMonth;
            private $translator;

            public function __construct(array $parameters) {}
            public function getDay() {}
            public function getMonth() {}
            public function getDayNameAbbreviation() {}
            public function isCurrentDay() {}
            public function isDifferentMonth() {}
        }

    Each calendar day can include many events stored in a database. My question is: what is the best way to manage these events in my classes? I'm thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This kind of solution doesn't allow other coders to reuse the classes without a db (because I would also have to inject at least a repository service) just to create and display a calendar... so maybe it would be better to extend CalendarCell (for instance into CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be very appreciated!
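    One way to keep the calendar classes reusable without a database is to depend on a small provider interface rather than on a concrete repository. The sketch below is a rough, hypothetical illustration of that idea in Java (the post itself is PHP, and the names EventProvider, CalendarEvent and InMemoryEventProvider are made up for illustration); the same shape translates directly to a PHP interface injected into Calendar's constructor.

        import java.time.LocalDate;
        import java.util.Collections;
        import java.util.List;
        import java.util.Map;

        // What the calendar needs to know about events - nothing about a database.
        interface EventProvider {
            List<CalendarEvent> eventsOn(LocalDate day);
        }

        class CalendarEvent {
            final String title;
            CalendarEvent(String title) { this.title = title; }
        }

        // A db-free implementation for tests or for users who have no database;
        // a repository-backed implementation would be another one.
        class InMemoryEventProvider implements EventProvider {
            private final Map<LocalDate, List<CalendarEvent>> events;
            InMemoryEventProvider(Map<LocalDate, List<CalendarEvent>> events) { this.events = events; }
            public List<CalendarEvent> eventsOn(LocalDate day) {
                return events.getOrDefault(day, Collections.emptyList());
            }
        }

    With something like this, each CalendarCell can be handed its event list by the Calendar (which asks the injected EventProvider), and the classes stay usable with no repository wired in.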


  • JEditorPane Code Completion

    - by Geertjan
    Code completion in a JEditorPane: unfortunately, a lot of this solution depends on the Java Editor support in the IDE. Therefore, to use it in its current state, you'll need lots of Java Editor related JARs even though your own application probably doesn't include a Java Editor. A key thing one needs to do is implement the NetBeans Code Completion API, using the related tutorial in the NetBeans Platform Learning Trail, but register the CompletionProvider as follows:

        @MimeRegistration(mimeType = "text/x-dialog-binding", service = CompletionProvider.class)

    Then in the TopComponent, include this code, which will bind all the completion providers in the above location, i.e., text/x-dialog-binding, to the JEditorPane:

        EditorKit kit = CloneableEditorSupport.getEditorKit("text/x-java");
        jEditorPane1.setEditorKit(kit);
        FileObject fob;
        try {
            fob = FileUtil.getConfigRoot().createData("tmp.java");
            DataObject dob = DataObject.find(fob);
            jEditorPane1.getDocument().putProperty(
                    Document.StreamDescriptionProperty,
                    dob);
            DialogBinding.bindComponentToFile(fob, 0, 0, jEditorPane1);
            jEditorPane1.setText("Egypt");
        } catch (IOException ex) {
            Exceptions.printStackTrace(ex);
        }

    Not a perfect solution, a bit hacky, with a high overhead, but a start nonetheless. Someone should look in the NetBeans sources to see how this actually works and then create a generic solution that is not tied to the Java Editor.
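    For reference, a provider registered against that MIME type is, very roughly, a class like the following - a sketch assuming the standard NetBeans Code Completion API from the tutorial mentioned above (the provider name is a placeholder, and the actual CompletionItem instances are left out):

        import javax.swing.text.Document;
        import javax.swing.text.JTextComponent;
        import org.netbeans.api.editor.mimelookup.MimeRegistration;
        import org.netbeans.spi.editor.completion.CompletionProvider;
        import org.netbeans.spi.editor.completion.CompletionResultSet;
        import org.netbeans.spi.editor.completion.CompletionTask;
        import org.netbeans.spi.editor.completion.support.AsyncCompletionQuery;
        import org.netbeans.spi.editor.completion.support.AsyncCompletionTask;

        @MimeRegistration(mimeType = "text/x-dialog-binding",
                          service = CompletionProvider.class)
        public class CountryCompletionProvider implements CompletionProvider {

            @Override
            public CompletionTask createTask(int queryType, JTextComponent component) {
                if (queryType != CompletionProvider.COMPLETION_QUERY_TYPE) {
                    return null;
                }
                return new AsyncCompletionTask(new AsyncCompletionQuery() {
                    @Override
                    protected void query(CompletionResultSet resultSet,
                                         Document doc, int caretOffset) {
                        // Add CompletionItem implementations here, e.g. one per country name.
                        resultSet.finish();
                    }
                }, component);
            }

            @Override
            public int getAutoQueryTypes(JTextComponent component, String typedText) {
                return 0; // no automatic popup
            }
        }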


  • Find methods related to test cases in Java

    - by user3623718
    I want to automatically change some methods in a program. These methods contain compiler errors, and my program aims to fix those compiler errors. After fixing the compiler errors I need to run the test cases related to the changed method (or class) to know whether the change is correct and, if not, which test cases failed. As the programs under investigation are very big, I only need to run the test cases related to the changes. For example, if I change one method, then I only need to run the test cases related to that method. Therefore, what I need is to programmatically find the test cases related to each method and class. It would also be useful if there is a tool that can do that for me - for example, a tool that creates a matrix showing which test cases relate to which method(s). One easy way to do this is to run all the test cases and record which functions they access. However, the problem is that at the beginning the input program contains compiler errors, so it is not possible to run the test cases. Please let me know the best way to do this. An API or a tool that I can use programmatically would be best for me.
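    Once a test-to-method mapping exists (however it is produced - coverage tooling, instrumentation, or a hand-maintained matrix), running just the mapped tests and collecting the failures is straightforward with the JUnit 4 API. A minimal sketch, assuming JUnit 4 on the classpath and a hypothetical mapping that yields the relevant test classes:

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.Failure;

        public class TargetedTestRunner {
            // Run only the test classes mapped to a changed method and report failures.
            public static void runTestsFor(Class<?>... relatedTestClasses) {
                Result result = JUnitCore.runClasses(relatedTestClasses);
                System.out.println("Ran " + result.getRunCount() + " tests, "
                        + result.getFailureCount() + " failed");
                for (Failure failure : result.getFailures()) {
                    System.out.println("FAILED: " + failure.getTestHeader());
                    System.out.println(failure.getMessage());
                }
            }
        }

    The harder part, producing the mapping itself, is not shown here; it typically comes from per-test coverage data gathered on a compiling build.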


  • How do I develop a database-utilizing application in an agile/test-driven-development way?

    - by user39019
    I want to add databases (traditional client/server RDBMSs like MySQL/PostgreSQL, as opposed to NoSQL or embedded databases) to my toolbox as a developer. I've been using SQLite for simpler projects with only one client, but now I want to do more complicated things (i.e., db-backed web development). I usually like following agile and/or test-driven-development principles. I generally code in Perl or Python. Questions:

    1. How do I test my code such that each run of the test suite starts with a 'pristine' state? Do I run a separate instance of the database server for every test? Do I use a temporary database?

    2. How do I design my tables/schema so that it is flexible with respect to changing requirements? Do I start with an ORM for my language, or do I stick to manually coding SQL? One thing I don't find appealing is having to change more than one thing (say, the CREATE TABLE statement and the associated CRUD statements) for one change, because that's error prone. On the other hand, I expect ORMs to be a lot slower and harder to debug than raw SQL.

    3. What is the general strategy for migrating data between one version of the program and a newer one? Do I carefully write ALTER TABLE statements between each version, or do I dump the data and import it fresh in the new version?
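    For the first question, one common shape of the "pristine state per test" idea is to rebuild the schema in the test fixture's setup. The question is about Perl/Python, but as a language-neutral illustration, here is a minimal sketch in Java with JUnit 4 and an in-memory H2 database (both assumed to be on the classpath; the table and test names are made up):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class UserDaoTest {
            private Connection conn;

            @Before
            public void setUp() throws Exception {
                // A fresh, private in-memory database; it disappears when the connection closes.
                conn = DriverManager.getConnection("jdbc:h2:mem:");
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(100))");
                }
            }

            @Test
            public void insertsAndReadsBackAUser() throws Exception {
                try (Statement st = conn.createStatement()) {
                    st.execute("INSERT INTO users VALUES (1, 'alice')");
                    ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM users");
                    rs.next();
                    assertEquals(1, rs.getInt(1));
                }
            }

            @After
            public void tearDown() throws Exception {
                conn.close();
            }
        }

    The same pattern works against a throwaway schema on a real MySQL/PostgreSQL server when an in-memory engine behaves too differently from production.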


  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I'm having some trouble using the texture() method inside a beginShape()/endShape() block. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object like this:

        void display() {
            Vec2 pos = level.getLevel().getBodyPixelCoord(body);
            float a = body.getAngle(); // needed for rotation
            pushMatrix();
            translate(pos.x, pos.y);
            rotate(-a);
            fill(temp); // temp is a color defined in the constructor
            stroke(0);
            beginShape();
            vertex(-w/2, -h/2);
            vertex(w/2, -h/2);
            vertex(w/2, h-h/2);
            vertex(-w/2, h-h/2);
            endShape(CLOSE);
            popMatrix();
        }

    Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) call and put in texture(img) (img is a PImage defined in the constructor), the stroke gets drawn, but the bar isn't filled and I get the warning:

        texture() is not available with this renderer

    What can I do in order to use textures anyway? I don't even understand the error message, since I don't know much about the different renderers.
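    For context, texture() in Processing is only supported by the OpenGL-capable renderers, not by the default JAVA2D one, so the renderer chosen in size() matters. Below is a minimal stand-alone illustration of a textured quad (Processing syntax, which compiles down to Java; the image file name is a placeholder, and this is a sketch of the general usage rather than a drop-in fix for the JBox2D code above):

        PImage img;

        void setup() {
          size(640, 480, P3D);        // a renderer that supports texture()
          img = loadImage("bar.png"); // hypothetical texture image
        }

        void draw() {
          background(0);
          translate(width/2, height/2);
          beginShape();
          texture(img);
          // with the default IMAGE texture mode, u/v are given in pixels
          vertex(-50, -50, 0, 0);
          vertex( 50, -50, img.width, 0);
          vertex( 50,  50, img.width, img.height);
          vertex(-50,  50, 0, img.height);
          endShape(CLOSE);
        }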


  • Change the Log Level of Node Manager.

    - by adejuanc
    This is useful for troubleshooting issues related to Node Manager, such as problems starting a Managed Server, or for finding the reasons a server was (re)started. To change the log level of Node Manager, you need to edit the nodemanager.properties file. This is usually located at:

        <MIDDLEWARE_HOME>/wlserver_10.3/common/nodemanager

    What you need to modify is the property:

        ...LogLevel=INFO...

    Information about the appropriate values for this property is available in the Node Manager documentation for WebLogic 10.3 (and later releases), which states:

        LogLevel: Severity level of logging used for the Node Manager log. Node Manager uses the same logging levels as WebLogic Server. Default value: INFO

    However, this is incorrect. WLS has its own implementation of log levels, but Node Manager uses the standard log levels from the java.util.logging.Level class. Therefore, the possible values for the Node Manager LogLevel, in descending order, are:

        SEVERE (highest value)
        WARNING
        INFO
        CONFIG
        FINE
        FINER
        FINEST (lowest value)

    The highest value provides only messages at the severe level; the warning level provides warning messages and severe messages, and so on. Besides those levels, ALL and OFF are also accepted. For example, if you only want severe messages to be logged, select SEVERE. If you need the most detailed tracing available, select FINEST. For more information on what is logged at each level, please read the Java SE API documentation for java.util.logging.Level.
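    As a quick illustration of that ordering (nothing WebLogic-specific here, just the standard JDK class that Node Manager reuses), the levels and their numeric severities can be printed with a few lines of Java; a higher intValue() means more severe, so setting FINEST lets the most detail through:

        import java.util.logging.Level;

        public class LogLevels {
            public static void main(String[] args) {
                Level[] levels = { Level.SEVERE, Level.WARNING, Level.INFO,
                                   Level.CONFIG, Level.FINE, Level.FINER, Level.FINEST };
                for (Level l : levels) {
                    System.out.println(l.getName() + " = " + l.intValue());
                }
            }
        }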


  • How do I list all the relation variables and debug them interactively?

    - by mfisch
    I'm writing a charm that requires a MySQL database. I found from looking at other charms that this (below) is how I get the info about the database:

        user=`relation-get user`
        password=`relation-get password`
        mysqlhost=`relation-get private-address`

    I only found this by reading the wordpress charm example - is there a way to show all the relation variables that I can use? Also, while debugging my db-relation-changed script, I wanted to ssh into my host and interactively run those commands, for example relation-get user, but it didn't work. I resorted to restarting everything and using juju log to print them out, which wasted a lot of time. Is there a way to print out these relations, either from my dev box or from the instance running my charm? (Below is what happens when I try to interactively run relation-get):

        ubuntu@mfisch-local-tracks-0:~$ relation-get user
        usage: relation-get [-h] [-o OUTPUT] [-s SOCKET] [--client-id CLIENT_ID]
                            [--format FORMAT] [--log-file FILE]
                            [--log-level CRITICAL|DEBUG|INFO|ERROR|WARNING]
                            [-r RELATION ID] [settings_name] [unit_name]
        No JUJU_AGENT_SOCKET/-s option found

    I tried juju debug-hooks tracks/0 -e local, which dropped me into a shell, but relation-get still failed.


  • How to set up users for a desktop app with SQL Azure as the backend?

    - by Manuel
    I'm considering SQL Azure as the DB for a new application I'm developing. The reason I want to go with Azure is that I don't want to have to maintain yet another database (or databases) and I want my users to be able to access the data from anywhere. The problem is that I'm not clear on how users will connect. The application is a basic CRUD type of Windows app. I've read that you need to add your IP to SQL Azure's firewall to connect to it, but I don't know if that is for administration purposes only. Can anyone clarify whether anyone (anywhere) can access the data with the proper credentials? Which of the following scenarios would work best (if at all)?

    A) Add each user to SQL Azure and have the app connect directly to Azure as if it were connecting to SQL Server.

    B) Add an anonymous user to SQL Azure and pass the real user's password/hash with every call so the Azure database can service the requests accordingly.

    C) Put a WCF service in between so that it handles the authentication. The service would only serve the appropriate information to the user given his/her authentication, and SQL Azure would be open to the service exclusively.

    D) Other ideas are welcome.

    This is confusing because all the Azure examples I see are for websites. I have a hard time believing SQL Azure wouldn't handle the case of desktop apps connecting to it. So what's the best practice?


  • Structuring Access Control In Hierarchical Object Graph

    - by SB2055
    I have a Folder entity that can be Moderated by users. Folders can contain other folders, so I may have a structure like this:

        Folder 1
        Folder 2
        Folder 3
        Folder 4

    I have to decide how to implement Moderation for this entity. I've come up with two options:

    Option 1: When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine if the user can moderate Folder 3, I check and see if User 1 is the moderator of any parent folders. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity.

    Option 2: When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1, and all child entities down to the grandest of grandchildren when the relationship is created; if it's ever removed, iterate back down the graph to remove the relationship. If I add something under Folder 2 after this relationship has been made, I just copy all Moderators into the new Entity. But when I need to show only the top-level Folders that a user is Moderating, I need to query all folders that have a parent folder that the user does not moderate, as opposed to option 1, where I just query any items that the user is moderating.

    Thoughts: I think it comes down to determining if users will be querying for all parent items more than they'll be querying child items... if so, then option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework in case it matters.
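    For what it's worth, the check at the heart of Option 1 is just a walk up the parent chain. A minimal, language-agnostic sketch (in Java rather than the C#/Entity Framework of the question; the class and field names are purely illustrative):

        import java.util.Set;

        class Folder {
            Folder parent;              // null for a root folder
            Set<String> moderatorIds;   // users with an explicit moderator relationship

            // Option 1: a user may moderate this folder if they moderate it
            // or any of its ancestors.
            boolean canModerate(String userId) {
                for (Folder f = this; f != null; f = f.parent) {
                    if (f.moderatorIds.contains(userId)) {
                        return true;
                    }
                }
                return false;
            }
        }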


  • Build 2013 – Keynote Thoughts

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/26/153243.aspx

    Some thoughts on the Build 2013 keynote.

    They Listened to Feedback while Keeping to their Plans. I am one of the people in the “bring back the start menu” camp. I want my start menu. I *like* my start menu. Microsoft heard that and put it back, fantastic. But they implemented it in a way that still pushes the Windows 8 UI – and I’m actually pretty happy with it. When you hit the Start menu, you get the live tiles displayed overlaying the desktop. But you can also swipe from the bottom to get the “all-applications” view. This, in essence, is really what those that like the Start Menu want. I believe it was mentioned that you can configure the all-applications view to be the default.

    They’re Committed to Improving Windows 8. The commitment to rapid deployments Ballmer talked about is crucial to Windows 8’s success. They need to keep it evolving quickly to maintain the interest of users and developers. I think the little improvements they showed are excellent (hands-free mode, multi-window docking, better multi-monitor support, new developer controls, etc.).

    Hardware Vendors are Committed to Windows 8. They showed off a number of new hardware products (Windows 8 and Windows Phone). The Surface’s introduction to the market has done nothing to dissuade their hardware partners.

    Bing as a Platform is Huge for Developers!!! This was the biggest take-away from the keynote! What the team is doing with Bing not as a search engine but as a developer API is very impressive! I’m going to be diving into this over the rest of Build, so watch for more blog posts coming on it.

    Azure, Office 365, and other topics will be covered at tomorrow’s keynote. So far, a great kick off to Build. Now on to sessions! D


  • What is the average page size for a single page application (SPA)? [on hold]

    - by Emmanuel Istace
    I'm developing a single page application with a lot of CSS & JavaScript. For now the page is 1.3 MB, composed of 5 sections. Here are the rounded stats:

        Document: 10 kb
        Style: 60 kb
        Images: 450 kb (already compressed; includes a big gallery of thumbnails)
        Javascript: 700 kb - 600 kb of "framework" (jquery, jquery-ui, bootstrap, modernizer, waypoint, ...) and 100 kb of custom JS
        Fonts: 125 kb

    And the site is not finished yet (it will include the gmap API, and some others...). My questions are:

    - Do you have any statistics about the average weight of an SPA?
    - As this is the whole website, do you think it's acceptable?
    - Is lazy loading (for images) a solution? What would the impact be on SEO?
    - Is the "200kb rule" of Google still relevant?
    - Do you know of good tools to detect which JavaScript code is not used during the execution of a page, and therefore how much of those 700 kb of framework JS could be trimmed?
    - Can a caching strategy be an answer?


  • What are the memory-management capabilities of MySQL + JDBC (in light of autonomic computing)?

    - by Adel
    I'm interested in implementing some kind of autonomic-computing functionality using MySQL. By autonomic computing I mean roughly some failsafe abilities, whereby the application appears to be at least slightly "intelligent". For reference, the main parts of autonomic computing we'd like are the "self-configuring" and "self-healing" features (the other two, "self-optimizing" and "self-protecting", are too abstract/futuristic for us at this time). So, for example, if we have a sample Java application that utilizes a MySQL database, we might want to automatically restart the MySQL database if it takes up too much memory. Or maybe we want the ability to dynamically adjust the database memory as needed. For example, when we start the application the database begins with a 56 megabyte buffer; but then, as we insert so many rows, we want it to automatically jump up to 512 MB, then to 1024, up to a max of 4096 MB. Does all of the above suggest that MySQL is too "weak" for the task? Do you suggest using Oracle Database? My professor believes that by using Java we can basically make up for any memory-management deficiencies that MySQL has in relation to Oracle DB. I'm new to MySQL, but have experience with Oracle. If all of the above sounds wishy-washy, it is because I'm still fleshing it out. Thanks.
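    As a starting point for the "self-configuring" side, the Java application can at least observe the relevant server settings over JDBC and react. A rough sketch, assuming MySQL Connector/J on the classpath (the connection details are placeholders, and whether the buffer pool can actually be resized without a restart depends on the MySQL version - on older versions innodb_buffer_pool_size is read-only at runtime and changing it means editing my.cnf and restarting):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class BufferPoolMonitor {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost:3306/test", "user", "password"); // placeholder credentials
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'")) {
                    if (rs.next()) {
                        long bytes = Long.parseLong(rs.getString("Value"));
                        System.out.println("Buffer pool: " + bytes / (1024 * 1024) + " MB");
                        // Hypothetical policy: flag when the pool looks too small or too large
                        // so a supervising process (or an operator) can adjust and restart.
                    }
                }
            }
        }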


  • Is IDirectInput8::FindDevice totally broken on Windows 7?

    - by Noora
    I'm developing on Windows 7 and using DirectInput8 for my input needs. I'm tracking gamepad additions and removals (that is, GUID_DEVINTERFACE_HID devices) using the DBT_DEVICEARRIVAL and DBT_DEVICEREMOVECOMPLETE messages, which works fine. However, what I've found is that no matter what I do, passing the values received from DBT_DEVICEARRIVAL to IDirectInput8's FindDevice method always fails to identify the device, returning DIERR_DEVICENOTREG. DirectInput still clearly knows about the device, because I can enumerate and create it just fine. I've tried with three different gamepads and the error persists, so it's not about that either. I also tried passing some alternative interface GUIDs to the RegisterDeviceNotification call; that didn't help. So, has anyone else faced the same problem, and have you found a usable workaround? I'm afraid I'll soon have to stoop to re-enumerating all devices when something is added or removed, but I'll first give this question one last shot here. EDIT: For the record, I've also tried pretty much every HID API & SetupAPI function for alternative ways of figuring out the needed GUIDs, with zero success. So if you're facing the same problem as me, don't bother with that route. I'm pretty sure those GUIDs are made up by DirectInput itself somehow. Short of reverse engineering dinput8.dll, I'm truly out of ideas now.


  • StreamInsight V2.0 Released!

    - by Roman Schindlauer
    The StreamInsight Team is proud to announce the release of StreamInsight V2.0! This is the version that ships with SQL 2012, and as such it has already been available to SQL CTP customers through Connect since December. As part of the SQL 2012 launch activities, we are now making V2.0 available to everyone, following our tradition of providing a separate download page. StreamInsight V2.0 includes a number of stability and performance fixes over its predecessor V1.2. Moreover, it introduces a dependency on the .NET Framework 4.0, as well as on SQL 2012 license keys. For these reasons, we decided to bump the major version number, even though V2.0 does not add new features or API surface. It can be regarded as a stepping stone to the upcoming release 2.1, which will contain significant new APIs (that will depend on .NET 4.0). Head over here to download StreamInsight V2.0. The updated Books Online can be found here. Update: For instructions on how to make your existing application work against the new bits without recompilation, see here. Regards, The StreamInsight Team


  • What patterns book for iOS development contains this specific information? [closed]

    - by Brett Ryan
    I've read several books on iOS development and Objective-C; however, a lot of them teach how to work with interfaces and keep the model inside the view controller, i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in what the best practices are for designing the structure of an application. Specifically, I'm interested in best practices for the following:

    How to separate a model from the view controller. I think I know how to do this by simply replacing the NSArray-style example with a specific model object; however, what I do not know is how to alert the view when the model changes. For example, in .NET I would solve this by conforming to INotifyPropertyChanged and databinding, and similarly in Java I would use PropertyChangeListener.

    How to create a service model for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object to manage an internal DB, and also services for communicating with remote endpoints. I need to learn the best ways to do this in a way that lets interface components subscribe to events such as widgetUpdated. These services should be singleton classes and somehow dependency-injected into model/controller objects.

    Books I've read so far are:

        Programming in Objective-C (4th Edition)
        Beginning iOS 5 Development: Exploring the iOS SDK
        The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers
        Learn Objective-C on the Mac: For OS X and iOS

    I've also purchased the following updated books but not yet read them:

        The Core iOS 6 Developer's Cookbook (4th Edition)
        Programming in Objective-C (5th Edition)

    I come from a Java and C# background with 15 years of experience. I understand that many of the ways I would do things in these languages may not fit the Objective-C way of developing applications. Would someone be able to point me to a book on this topic containing this specific subject matter?


  • Google Music Player doesn't work

    - by EricoPF
    I'm trying to log in to the Google Music Player application but it doesn't work. I get the message below:

        Login Failed
        Could not identify your computer. Learn More

    Google Help says that it doesn't run on virtual machines, which is not my case, though I do have VirtualBox installed; it also says some people get it to work if they disable their bridge network. The thing is, I don't have any bridge interface, and even if I remove all the VirtualBox modules I still get this message. This is my ifconfig output:

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:15374 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:15374 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1455889 (1.4 MB)  TX bytes:1455889 (1.4 MB)

        wlan0     Link encap:Ethernet  HWaddr 94:db:c9:b2:1b:d7
                  inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::96db:c9ff:feb2:1bd7/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:828467 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:568040 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1086663025 (1.0 GB)  TX bytes:72984931 (72.9 MB)

    Any ideas? Thanks guys! Cheers.


  • How do I make apt-get install commands not display every package that is installing?

    - by rajlego
    When I install something in the terminal, it often shows me a few status things. For one, it shows the download rate (which is fine). However, when I install something, it can also display output like:

        Unpacking libgranite2:amd64 (0.3.0~r732+pkg64~ubuntu0.3.1) ...
        Selecting previously unselected package slingshot-launcher.
        Preparing to unpack .../slingshot-launcher_0.7.6.1+r421+pkg32~ubuntu0.3.1_amd64.deb ...
        Unpacking slingshot-launcher (0.7.6.1+r421+pkg32~ubuntu0.3.1) ...
        Selecting previously unselected package contractor.
        Preparing to unpack .../contractor_0.3.1~r136+pkg22~ubuntu0.3.1_amd64.deb ...
        Unpacking contractor (0.3.1~r136+pkg22~ubuntu0.3.1) ...
        Selecting previously unselected package apport-hooks-elementary.
        Preparing to unpack .../apport-hooks-elementary_0.1-0~35~saucy1_all.deb ...
        Unpacking apport-hooks-elementary (0.1-0~35~saucy1) ...
        Processing triggers for hicolor-icon-theme (0.13-1) ...
        Processing triggers for libglib2.0-0:amd64 (2.40.0-2) ...
        Processing triggers for man-db (2.6.7.1-1) ...
        Setting up libgranite-common (0.3.0~r732+pkg64~ubuntu0.3.1) ...
        Setting up libgranite2:amd64 (0.3.0~r732+pkg64~ubuntu0.3.1) ...
        Setting up slingshot-launcher (0.7.6.1+r421+pkg32~ubuntu0.3.1) ...
        Setting up contractor (0.3.1~r136+pkg22~ubuntu0.3.1) ...
        Setting up apport-hooks-elementary (0.1-0~35~saucy1) ...
        Processing triggers for libc-bin (2.19-0ubuntu6) ...

    I would rather that not show up. I only want to see the download rate, not all that other stuff. How do I do this?

    EDIT: I would also like the jargon to be stored somewhere else in case something goes wrong, or for the jargon to just be expandable in the terminal.


  • Storing images in file system and returning URLs or virtually resizing and returning byte arrays?

    - by ismaelf
    I need to create a REST web service to manage user-submitted images and display them all on a website. There are multiple websites that are going to use this service to manage and display images. The requirement is to have 5 pre-defined image sizes available. The 2 options I see are the following:

    1. The web service creates the 5 images, stores them in the file system, and stores their URLs in the database when the user submits the image. When the image is requested, the web service returns an array of URLs. I see this option as being a little hard on the hard drive: the estimates are 10,000 users per site and, let's say, 100 sites. The heavy processing is done when the user submits the image, and each image is pulled from the file system.

    2. The web service stores just the image that the user submits in the file system, and its URL in the database. When the user requests images, the web service gets the info from the DB, loads the image into memory, creates its 5 instances, and returns an object with 5 image arrays (I will probably cache the arrays). This option is harder on the processor and memory; the heavy processing is done when the images are requested.

    A plus I see for option 2 is that it gives me the option to rewrite the URL of the image and make it site-dependent (prettier) rather than having one image repository for all websites. But this is not a big deal. What do you think of these options? Do you have any other suggestions?
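    For a sense of what the per-request work in option 2 looks like, the resize step can be done with nothing beyond the JDK. A minimal sketch (the class name and size choices are placeholders; scaling quality, output format and caching policy are left out):

        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class ImageResizer {
            // Load the stored original and derive one of the 5 pre-defined sizes on demand.
            public static BufferedImage resize(File original, int width, int height) throws Exception {
                BufferedImage src = ImageIO.read(original);
                BufferedImage dst = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = dst.createGraphics();
                g.drawImage(src.getScaledInstance(width, height, Image.SCALE_SMOOTH), 0, 0, null);
                g.dispose();
                return dst; // write out with ImageIO.write(dst, "jpg", target) or cache the bytes
            }
        }

    Option 1 is essentially the same code run once per size at upload time, with the results written to the file system instead of being produced per request.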


  • Dom U Installation on Ubuntu 11.10

    - by sridutt
    I am trying to add a DomU operating system on Ubuntu 11.10. I have successfully installed Xen (verified with xm info); virsh version returns:

        Compiled against library: libvir 0.9.2
        Using library: libvir 0.9.2
        Using API: Xen 3.0.1
        Running hypervisor: Xen 4.1

    Now when I tried to install Dom0, VMM said: unable to connect to 'localhost:8000'. So I followed this bug link, and I could then start adding a DomU. When adding the DomU, in the last stage, it gives the following error:

        Unable to complete install: 'POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found")'
        Traceback (most recent call last):
          File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
            callback(asyncjob, *args, **kwargs)
          File "/usr/share/virt-manager/virtManager/create.py", line 1899, in do_install
            guest.start_install(False, meter=meter)
          File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1223, in start_install
            noboot)
          File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1291, in _create_guest
            dom = self.conn.createLinux(start_xml or final_xml, 0)
          File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1686, in createLinux
            if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
        libvirtError: POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found")

    I tried following this bug link, which said the bug is solved in the below package. When I run ./configure on it, I get an error:

        checking for LIBXML... no
        checking libxml2 xml2-config >= 2.6.0 ... configure: error: Could not find libxml2 anywhere (see config.log for details).

    What is the problem?


  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides detail about how you influence other people and who influenced you. We'll be fetching data from a few social networking sites (i.e. LinkedIn, Facebook, Twitter, etc.) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relationship between two users can be determined. We'll be accessing the data offline as well to provide users with accurate results. If we consider Facebook activities, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is parsing of data, and its storage and retrieval with the least overhead. For that I've summarized a few points below:

    - Should we switch to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though no team member has experience with them?
    - Should we use SQL Server Analysis Services?
    - Is there any library other than Json.NET for parsing data?
    - Is it advisable to use any C# library over FQL + GET requests?

    I've tried to provide as much info as possible. Please share your views.


  • What's wrong with performing unit tests against a concrete implementation if your frameworks are not going to change?

    - by palm snow
    First, a bit of background: we are re-architecting our product suite, which was written 10 years ago and has served its purpose. One thing that we cannot change is the database schema, as we have a client base of 500+ using this system. Our db schema has over 150 tables. We have decided on Entity Framework 4.1 as the DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go with setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing repositories, unit of work, mocking frameworks, etc. This means there will be a cost and an investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, it will be mostly hand written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repositories, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change it whenever you like; however, that is very likely YAGNI in our case. What we need is more integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems more beneficial in our case. Any thoughts on whether I am missing anything here?


  • Mahout - Clustering - "naming" the cluster elements

    - by Mark Bramnik
    I'm doing some research and I'm playing with Apache Mahout 0.6. My purpose is to build a system which will name different categories of documents based on user input. The documents are not known in advance, and I also don't know which categories I have while collecting these documents. But I do know that all the documents in the model should belong to one of the predefined categories. For example, let's say I've collected N documents that belong to 3 different groups:

        Politics
        Madonna (pop star)
        Science fiction

    I don't know which document belongs to which category, but I know that each one of my N documents belongs to one of those categories (e.g. there are no documents about, say, basketball among these N docs). So I came up with the following idea:

    1. Apply Mahout clustering (for example, k-means with k=3) to these documents. This should divide the N documents into 3 groups, which will be kind of my model to learn with. I still don't know which document really belongs to which group, but at least the documents are now clustered by group.

    2. Ask the user to find any document on the web that should be about 'Madonna' (I can't show the user any of my N documents; it's a restriction). Then I want to measure the 'similarity' between this document and each one of the 3 groups. I expect to see that the similarity measurement between the user_doc and the documents in the Madonna group in the model will be higher than the similarity between the user_doc and the documents about politics.

    I've managed to produce the cluster of documents using the 'Mahout in Action' book. But I don't understand how I should use Mahout to measure similarity between a 'ready' cluster group of documents and one given document. I thought about rerunning the clustering with k=3 for N+1 documents with the same centroids (in terms of k-means clustering) and seeing where the new document falls, but maybe there is another way to do that? Is it possible to do with Mahout, or is my idea conceptually wrong? (An example in terms of the Mahout API would be really good.) Thanks a lot, and sorry for the long question (couldn't describe it better). Any help is highly appreciated. P.S. This is not a homework project :)
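    One possible shape for the "compare the user's document with each group" step is sketched below, under some assumptions: the N documents have already been vectorized and clustered (as in the Mahout in Action examples), the resulting cluster objects and the vector for the user's document are in hand, and cosine distance is used as the similarity measure. The class names are from Mahout 0.6, but the method itself is only an illustration, not the one canonical way to do it.

        import java.util.List;

        import org.apache.mahout.clustering.Cluster;
        import org.apache.mahout.common.distance.CosineDistanceMeasure;
        import org.apache.mahout.math.Vector;

        public class ClusterMatcher {
            // Returns the cluster whose centroid is closest (most similar) to the user document.
            public static Cluster closestCluster(Vector userDocVector, List<Cluster> clusters) {
                CosineDistanceMeasure measure = new CosineDistanceMeasure();
                Cluster best = null;
                double bestDistance = Double.MAX_VALUE;
                for (Cluster c : clusters) {
                    double d = measure.distance(c.getCenter(), userDocVector);
                    if (d < bestDistance) {
                        bestDistance = d;
                        best = c;
                    }
                }
                return best;
            }
        }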

