Search Results

Search found 7179 results on 288 pages for 'slow logon'.

Page 183/288

  • Internet is far slower in Ubuntu than Windows 7 on dual-booted machine

    - by Tim
    Edit: I'll leave the original post as-is, but after further investigation, it appears that the problem has something to do with my wi-fi card. Speeds are normal when I connect via cable.

    Edit 2: Problem solved. It was something to do with the wireless card drivers.

    I normally use Windows 7 on my laptop and get internet speeds of about 15-20 Mb/s. I have recently dual-booted with Ubuntu 12.10, and have noticed that internet speeds are drastically slower in Ubuntu. When tested, speeds range from 0.2-2 Mb/s, although occasionally they are significantly faster than that, or even stop completely for short periods of time. I've also noticed that when first booting into Ubuntu, speeds start fairly fast and then drop to incredibly slow within a few seconds to a few minutes. There's still some possibility that the issue is with my ISP, as things seem slower than usual even in Windows, but I suspect it is related to Ubuntu, as things are far slower in Ubuntu than in Windows. I'm wondering, what could be the cause of this?

    Potentially relevant information: I've dual-booted before on this machine with earlier versions of Ubuntu (different ISP at the time) with no problem. ISP: Rogers (major Canadian ISP).

    System info (Gateway NV53a laptop):
    - Operating system: MS Windows 7 Home Premium 64-bit
    - CPU: AMD Phenom II N970 (Caspian, 45nm technology)
    - RAM: 6.00 GB dual-channel DDR3 @ 664MHz (9-9-9-24)
    - Motherboard: Gateway SJV51_DN (Socket S1G4)
    - Graphics: Generic PnP Monitor (1366x768@60Hz), ATI Mobility Radeon HD 4250 (Acer Incorporated [ALI])
    - Hard drive: 733GB TOSHIBA MK7559GSXP ATA Device (SATA)
    - Networking: connected through Wi-Fi, Atheros AR5B97 Wireless Network Adapter

    Read the article

  • Oracle Java Embedded Client 1.1 Released

    - by Roger Brinkley
    Yesterday an update release of Oracle Java Embedded Client (OJEC) 1.1 quietly slipped out the door for general availability. Until last year it was pretty difficult to get your hands on either a Connected Limited Device Configuration (CLDC) Java implementation for small devices or a Connected Device Configuration (CDC) implementation for medium devices without a substantial initial commitment, but with the release of OJWC (CLDC) and OJEC (CDC) last year that has changed. OJEC 1.1 is a binary distribution designed for installation on medium configurations: a mid-range processor requiring a slow startup time, seamless upgrades, in a cost-sensitive hardware environment with anywhere from 3.5 MB to 8 MB. There are headless as well as headed versions available. It is intended for devices such as Blu-ray Disc players, set-top boxes, residential gateways, VoIP phones, and similar. From a software point of view, OJEC is the Java runtime platform implementation of Connected Device Configuration (CDC v1.1, JSR 218), Foundation Profile (FP v1.1, JSR 219), and Personal Basis Profile (PBP v1.1, JSR 217), and includes the optional packages RMI (JSR 66), JDBC (JSR 169), XML API for Java ME (JSR 280), and Java TV (JSR 927). New to this release is support for the XML API (JSR 280), along with a number of bug fixes and performance enhancements, including improved Just-in-Time (JIT) compilation for the x86 chipset architecture. The platforms supported include ARMv5, ARMv6/ARMv7, MIPS 32 74K, and x86 in headless mode. For embedded developers there are a number of advantages to using Java, and if you have shied away from the Java ME edition in the past, I would encourage you to look into the updated version of OJEC 1.1.

    Read the article

  • How do you explain to an "agile" team that they still need to plan the software they write?

    - by user23157
    This week at work I got agiled yet again. For the past few months we have followed the standard agile, TDD, shared-ownership, ad hoc development methodology: never planning anything beyond a few user stories on a piece of card, verbally chewing the cud over the technicalities of a 3rd party integration ad nauseam without ever doing any real thinking or due diligence, and architecturally coupling all production code to the first test that comes into anyone's head. Now we reach the end of a release cycle and, lo and behold, the main externally visible feature that we have been developing is too slow to use, buggy, becoming labyrinthine in its complexity, and completely inflexible.

    During this process "spikes" were done but never documented, and not a single architectural design was ever produced (there was no FS, so what the hell, eh? If you don't know what you are developing, how can you plan or research it?). The project passed from pair to pair, each of whom only ever focused on a single user story at a time, and the result was inevitable.

    To resolve this I went off the radar, went (the dreaded) waterfall, planned, coded, and basically didn't swap off the pair, trying as much as I could to work alone and focusing on solid architecture and specifications rather than unit tests, which will come later once everything is pinned down. The code is now much better and is actually totally usable, flexible and fast. Certain people seem to have really resented me doing this and have gone out of their way to sabotage my efforts (possibly unconsciously), because it goes against the holy process of agile.

    So how do you, as a developer, explain to the team that it is not "un-agile" to plan their work, and how do you fit planning into the agile process? (I'm not talking about the IPM; I'm talking about sitting down with a problem and sketching out an end-to-end design that says how the problem should be solved, in sufficient detail that anyone who works on it knows what architecture and patterns they should be using and where the new code should integrate into the existing code.)

    Read the article

  • Lubuntu 12.04 on Acer laptop boots to blank blue screen

    - by WGCman
    My previous question on this was closed, but I am posting it again as the solution my son eventually found may assist other users of the forum, or someone may be able to tweak the solution to improve the performance.

    Having installed Kubuntu 12.04.01 from a live USB onto my desktop, I wanted to do the same on my laptop, an Acer Aspire 1362, which has 256MB of RAM (actually 512MB "on the box", but a good deal can be borrowed by the graphics!). I found Kubuntu wouldn't run on so little memory, but downloaded Lubuntu-12.04-alternate-i386.iso, which I understood was light enough to go.

    The laptop has one internal 40GB Toshiba hard drive divided into 3 partitions: C, 19GB with Windows XP, Windows program files and some data; D, 19GB, mostly data; and a small 2GB partition with some Acer software, which XP can't normally "see". I transferred most of the contents of D to a memory stick, leaving 16GB free for Lubuntu. I did not want to dump XP yet, though it is painfully slow.

    I installed Lubuntu from the USB stick, accepting the default answers to most of the questions. The D: partition was further partitioned into a 500MB boot partition, 10GB for Linux, 2GB swap and 6GB for data shareable between Linux and Windows. I had no error messages during installation, rebooted, was offered the choice of Ubuntu or XP, and selected the former.

    After a few minutes, I get a dark blue screen announcing Lubuntu with five dots underneath which lighten in turn. Eventually the lights stop, and whatever I try, the screen remains blank apart from "Lubuntu". I tried several solutions suggested on the forum for "identical" questions, but without success.

    Read the article

  • Delaying a Foreach loop half a second

    - by Sigh-AniDe
    I have created a game that has a ghost that mimics the movement of the player after 10 seconds. The movements are stored in a list, and I use a foreach loop to go through the commands. The ghost mimics the movements, but it does them way too fast; within a split second of spawn time it catches up to my current movement. How do I slow down the foreach so that it only executes a command every half a second? I don't know how else to do it. Please help. This is what I tried (the foreach runs inside the Update method):

        DateTime dt = DateTime.Now;
        foreach (string commandDirection in ghostMovements)
        {
            int mapX = (int)(ghostPostition.X / scalingFactor);
            int mapY = (int)(ghostPostition.Y / scalingFactor);

            // If dt is the same as the current time
            if (dt == DateTime.Now)
            {
                if (commandDirection == "left")
                {
                    switch (ghostDirection)
                    {
                        case ghostFacingUp:
                            angle = 1.6f;
                            ghostDirection = ghostFacingRight;
                            Program.form.direction = "";
                            dt.AddMilliseconds(500); // add half a second to dt
                            break;
                        case ghostFacingRight:
                            angle = 3.15f;
                            ghostDirection = ghostFacingDown;
                            Program.form.direction = "";
                            dt.AddMilliseconds(500);
                            break;
                        case ghostFacingDown:
                            angle = -1.6f;
                            ghostDirection = ghostFacingLeft;
                            Program.form.direction = "";
                            dt.AddMilliseconds(500);
                            break;
                        case ghostFacingLeft:
                            angle = 0.0f;
                            ghostDirection = ghostFacingUp;
                            Program.form.direction = "";
                            dt.AddMilliseconds(500);
                            break;
                    }
                }
            }
        }
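    One way to approach this, sketched below, is to stop re-walking the whole list every frame and instead consume one stored command whenever enough game time has accumulated, using XNA's ElapsedGameTime rather than comparing DateTime values (note that DateTime.AddMilliseconds returns a new value instead of modifying dt in place, which is one reason the code above never delays). This is a minimal sketch; the Queue, the field names and the PerformMove helper are my own assumptions, not the original code:

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public class GhostReplay
        {
            // Assumed names: a queue of recorded commands and a half-second delay.
            private readonly Queue<string> ghostMovements = new Queue<string>();
            private const double ReplayDelay = 0.5;   // seconds between commands
            private double timeSinceLastMove;         // time accumulated so far

            public void Update(GameTime gameTime)
            {
                // Accumulate the real time elapsed since the last frame.
                timeSinceLastMove += gameTime.ElapsedGameTime.TotalSeconds;

                // Consume at most one command per half second.
                if (timeSinceLastMove >= ReplayDelay && ghostMovements.Count > 0)
                {
                    timeSinceLastMove -= ReplayDelay; // keep the remainder
                    PerformMove(ghostMovements.Dequeue());
                }
            }

            // Placeholder for the direction/angle switch from the question.
            private void PerformMove(string commandDirection)
            {
                // ... update angle and ghostDirection here ...
            }
        }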

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

        SOURCE:
        CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);

        TARGET:
        CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:

    1. Creating a table with the new schema
    2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:

    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA.

    Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?

        A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL: CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating.

    Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

        SOURCE:
        CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);

        TARGET:
        CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100));

    What will happen if we use the dependency algorithm above on the source and target databases? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1 on the source, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference, so SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

        SOURCE              TARGET
        SchemaA.Table1  ->  SchemaA.Table1
        SchemaB.Table1  ->  (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:

    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with the foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this, we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try to create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

        SOURCE:
        CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (Col1 NUMBER);

        TARGET:
        CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try to synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:

    1. Find initial dependencies of schemas the user has selected to compare on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:

    1. Find initial dependencies
       - SchemaA.Table1 -> SchemaB.Table1 found on source
       - No objects found on target
    2. Include objects in both source and target
       - SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects
       - No objects to start with on source
       - SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target
       - SchemaC.Table1 included in source and target
    5. Run dependency query on found objects
       - No objects found in source
       - No objects to start with in target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client, we also pull the graph across in bits: we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.

    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in those schemas. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY and ORDSYS.SI_COLOR, among others), we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that some of the clauses querying the dependency information we required were hitting system tables with no indexes on them!
    To cut a long story short, running the following query:

        SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born: it skips this and a couple of similar clauses, drastically speeding up the dependency query execution time at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
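    As a rough illustration of the client-side approach described above, here is a minimal, hypothetical sketch of a breadth-first walk over dependency edges with a blacklist and a lazy per-schema loader. All the names here (DependencyWalker, LoadEdgesForSchema) are invented for the example; the real engine is considerably more involved:

        using System.Collections.Generic;

        class DependencyWalker
        {
            // Objects we refuse to follow dependency chains through.
            static readonly HashSet<string> Blacklist = new HashSet<string>
            {
                "SYS", "PUBLIC", "MDSYS.SDO_GEOMETRY", "ORDSYS.SI_COLOR"
            };

            // Placeholder: a real version would query all_constraints and
            // all_dependencies for a single schema's edges.
            static IEnumerable<(string From, string To)> LoadEdgesForSchema(string schema)
            {
                yield break;
            }

            static HashSet<string> FindDependents(IEnumerable<string> startObjects)
            {
                var found = new HashSet<string>(startObjects);
                var queue = new Queue<string>(found);
                var loadedSchemas = new HashSet<string>();
                var edges = new Dictionary<string, List<string>>();

                while (queue.Count > 0)
                {
                    string obj = queue.Dequeue();
                    string schema = obj.Split('.')[0];
                    if (Blacklist.Contains(schema) || Blacklist.Contains(obj))
                        continue; // don't traverse through blacklisted objects

                    // "Thunk": pull in a schema's edges the first time we meet it.
                    if (loadedSchemas.Add(schema))
                        foreach (var (from, to) in LoadEdgesForSchema(schema))
                        {
                            if (!edges.TryGetValue(from, out var list))
                                edges[from] = list = new List<string>();
                            list.Add(to);
                        }

                    if (edges.TryGetValue(obj, out var targets))
                        foreach (string target in targets)
                            if (found.Add(target))
                                queue.Enqueue(target);
                }
                return found;
            }
        }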

    Read the article

  • Display Call To Action bar on page load [migrated]

    - by dasickle
    I am using the following code to load the bar on click, but I can't figure out how to load it on page load automatically.

        <script>
        var autohide;
        $('body').prepend('<div id="bn-bar"><b>DON\'T MISS OUT!</b> Only 9 seats remain for the Google Tag Manager training on May 22! <a href="#">Book Your Seat Today!</a><div id="hider"> </div></div>');
        $(document).ready(function(){
            $("#hider").click(function(){
                $("#bn-bar").animate({ top: "-50" }, "fast", "linear", function(){});
            });
            $("#bn-bar").mouseover(function(){ clearTimeout(autohide); });
            setTimeout(function(){ $("#bn-bar").animate({ top: "0" }, "slow", "linear", function(){}); }, 2500);
            autohide = setTimeout(function(){ $("#bn-bar").animate({ top: "-30" }, "fast", "linear", function(){}); }, 10000);
        });
        </script>

    Basically I am trying to load the message when a user enters my website, and I will be inserting it via Google Tag Manager. Below is the page where I found the code: Creative Tag Manager – Ads, Promotions, and Visitor Messaging - Lunametrics

    Read the article

  • PHP profiling on a production server, or other options

    - by absentx
    Alright, I need some help here. I am commonly asked to speed up certain sections of some websites that I program for, but I have yet to figure out how to use a good PHP diagnosis/profiling tool. Some things to consider:

    The sites I am working on are already built. Getting a testing server set up locally is a huge pain: I have to rewrite include paths and so many other things. This is a results-oriented deal, and spending days getting a site fully working on a testing platform just so I can debug one page probably isn't an option.

    I can write tons of PHP, but I have no clue how to interact with or configure servers. So every tutorial I read about setting up xdebug or xhprof seems to involve getting something installed on a production server that I don't have access to, or have no clue how to work with. Are there any solutions out there that will show me where my PHP is slow without having to do all sorts of server work that I just don't know how to do? Xhprof seems to be the closest to usable for me, but from what I can tell it still has to be installed on a server.

    If anyone can just point me in the right direction on this I would be very grateful. Maybe getting these things put on the server isn't a big deal, but I have never interacted with server command lines or anything like that. I suppose I should start sometime, but I really have no idea where to start. Plus, I realize that profiling on a live platform is not the greatest idea either, but I feel I am in a tough spot: I have speed issues to solve, and setting up a local environment, while a great idea, just doesn't seem practical at the moment.

    Read the article

  • Ubuntu won't fit 10" netbook's native display

    - by Daniel
    I recently removed Windows 7 Starter from my netbook and replaced it with Ubuntu 12.10. The problem is that some bits of the system don't fit the native display resolution of 1024x600, i.e. the bottom bits of Ubuntu are hidden beneath the screen, and the only 2 available resolutions are the default 1024x768 and 800x600.

    I've also thought about replacing Ubuntu with Lubuntu or Puppy Linux, as the system does run a bit slow, but I can't, as then I won't be able to access the taskbar and application menu, which would be hidden beneath the screen. Only Ubuntu with Unity is currently usable, as I can see the Unity Launcher.

    My netbook model is the HP Mini 210-1004sa, which comes with an Intel Graphics Media Accelerator 3150 and a 10.1" Active Matrix Colour TFT display at 1024 x 600. I was able to define a custom 1024x600 resolution using the Q&A "How set my monitor resolution?", but when I set that resolution, the desktop area is lowered, with bits of it hidden beneath the screen, and there's a black space left at the top of the screen. I had to revert to the old 1024x768 setting to push the desktop upwards and remove the black space.

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy, all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff).

    I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Worst case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that.

    EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap<Integer, Integer>[][]
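    For illustration, here is a minimal sketch of run-length encoding a single row of tile ids as (count, id) pairs, so that editing one tile only requires re-encoding the row it belongs to. It is written in C# for the sake of a concrete example (the idea ports directly to Java), and all the names are my own, not from the question:

        using System.Collections.Generic;

        // Illustrative sketch: RLE one row of tile ids as (count, id) runs.
        static class TileRle
        {
            public static List<(int Count, int Id)> Encode(int[] row)
            {
                var runs = new List<(int Count, int Id)>();
                int count = 1;
                for (int i = 1; i <= row.Length; i++)
                {
                    // Extend the current run while the id repeats.
                    if (i < row.Length && row[i] == row[i - 1])
                    {
                        count++;
                        continue;
                    }
                    runs.Add((count, row[i - 1])); // close the current run
                    count = 1;
                }
                return runs;
            }

            // Expand runs back into a full row of tile ids.
            public static int[] Decode(List<(int Count, int Id)> runs, int rowLength)
            {
                var row = new int[rowLength];
                int pos = 0;
                foreach (var (count, id) in runs)
                    for (int j = 0; j < count; j++)
                        row[pos++] = id;
                return row;
            }
        }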

    Read the article

  • Using SQL tables for storing user created level stats. Is there a better way?

    - by Ivan
    I am developing a racing game in which players can create their own tracks and upload them to a server. Players will be able to compare their best track times to their friends' and see world records.

    I was going to generate a table for each track submitted, to store the best times of each player who plays the track. However, I can't predict how many will be uploaded, and I imagine too many tables might cause problems; or is this a valid method?

    I considered saving each player's best times in a string in a single table field, like so: level1:00.45;level2:00.43;level3:00.12. If I did this I wouldn't need a separate table for each level (each level could just have a row in a 'WorldRecords' table). However, this just causes another problem, because the text would eventually reach the limit for varchar length.

    I also considered storing the times data in XML files. This would avoid database issues, and server disk space can be increased if needed. But I imagine this would be very slow: to update one player's best time on one level, I would have to check every node in the file to find their time record to update.

    Apologies for the wall of text. Any suggestions would be appreciated.

    Read the article

  • Calculating 3d rotation around random axis

    - by mitim
    This is actually a solved problem, but I want to understand why my original method didn't work (hoping someone with more knowledge can explain). (Keep in mind, I'm not very experienced in 3D programming, having only played with the very basics for a little bit, nor do I have a lot of mathematical experience in this area.)

    I wanted to animate a point rotating around another point at a random axis, say 45 degrees along the y axis (think of an electron around a nucleus). I know how to rotate using the transform matrix along the X, Y and Z axes, but not around an arbitrary (45 degree) axis. Eventually, after some research, I found a suggestion: rotate the point by -45 degrees around the Z so that it is aligned, then rotate by some increment along the Y axis, then rotate it back +45 degrees, for every frame tick.

    While this certainly worked, I felt that it seemed to be more work than needed (too many method calls, math, etc.) and would probably be pretty slow at runtime with many points to deal with. I thought it might be possible to combine all the rotation matrices involved into 1 rotation matrix and use that as a single operation. Something like:

        [ cos(-45)  -sin(-45)  0 ]
        [ sin(-45)   cos(-45)  0 ]    rotate by -45 along Z
        [ 0          0         1 ]

    multiplied by

        [ cos(2)  0  -sin(2) ]
        [ 0       1   0      ]    rotate by 2 degrees (my increment) along Y
        [ sin(2)  0   cos(2) ]

    then multiply that result by (in that order)

        [ cos(45)  -sin(45)  0 ]
        [ sin(45)   cos(45)  0 ]    rotate by 45 along Z
        [ 0         0        1 ]

    I get one mess of a matrix of numbers (since I was working with unknowns and 2 angles), but I felt like it should work. It did not, and I found a solution on wiki using a different matrix, but that is something else. I'm not sure if maybe I made an error in multiplying, but my question is: is this actually a viable way to solve the problem, to take all the separate transformations, combine them via multiplying, and then use that, or not?
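    For what it's worth, composing the three rotations into one matrix is a viable approach; the sketch below shows it in XNA-style C# (class and variable names are mine, and the multiplication order assumes XNA's row-vector convention, where transforms listed left to right are applied in that order). Matrix.CreateFromAxisAngle achieves the same result in a single step:

        using Microsoft.Xna.Framework;

        class Orbit
        {
            const float Tilt = MathHelper.PiOver4; // the 45-degree axis tilt
            float angle;                           // accumulated spin

            public Vector3 Step(Vector3 offsetFromNucleus)
            {
                angle += MathHelper.ToRadians(2f); // the per-tick increment

                // Untilt, spin around Y, tilt back - combined once per frame.
                Matrix combined =
                    Matrix.CreateRotationZ(-Tilt) *
                    Matrix.CreateRotationY(angle) *
                    Matrix.CreateRotationZ(Tilt);

                return Vector3.Transform(offsetFromNucleus, combined);
            }

            // Single-step alternative: rotate about the tilted axis directly.
            public Vector3 StepDirect(Vector3 offsetFromNucleus)
            {
                angle += MathHelper.ToRadians(2f);
                Vector3 axis = Vector3.Transform(
                    Vector3.Up, Matrix.CreateRotationZ(Tilt));
                return Vector3.Transform(
                    offsetFromNucleus, Matrix.CreateFromAxisAngle(axis, angle));
            }
        }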

    Read the article

  • Are there plans for handwriting recognition?

    - by Patrick
    This is a big feature when it comes to putting Ubuntu onto tablets. Currently, Netbook edition works great for that purpose and the pen digitiser is perfect, but handwriting recognition would be a real dealmaker (especially for my business - we could actually move to Linux) in competing with the Windows offering.

    CellWriter exists, but it only handles character and keyboard input (I don't know about multitouch on the keyboard). It also needs to handle print and cursive, because character mode can be slow and uncomfortable (unless you're writing passwords). Lastly, CellWriter needs to have some default letter shapes rather than having to be trained from the start.

    There is a software package called MyScript (by Vision Objects) that handles all four modes (keyboard, character, print, cursive) plus calculator and fullscreen, but it's only free as a trial. Still, it would be nice to see it in the For Purchase section and the trial in the free section of the Software Centre. The only other options are for Chinese/Japanese/Korean characters.

    What would really make a difference for us is the integration of some formal API with the OS that can automatically activate when running on a tablet, passing ink data to whatever recognition system is installed, with something available (however rudimentary) to use it.

    Read the article

  • Problems with Maverick upgrade

    - by altenuta
    I upgraded to Maverick 10.10 from Lucid, on an old Toshiba Satellite with a 1.1GHz processor and 256MB of RAM. Initially I couldn't get my wireless to work; that solved itself after installing various updates and programs. The problems that remain are:

    1. I have to authorize at least 2 times at start-up. This machine is Ubuntu only.
    2. No boot load screen.
    3. I have a ton of programs and system directories that are in my home folder. Is this normal?
    4. It is difficult to wake the computer from sleep. Usually I just shut it down and restart. Tonight I waited and got a message about corrupt memory.
    5. The computer takes forever to do just about everything, whether starting programs or doing things on the web.

    I am a longtime Mac user (since 1986) and also manage a network of several windoze machines. I am definitely a GUI guy and do very little in the terminal, so I really need to know where to begin to get things straightened out. Can I rescue this machine without wiping it and doing a fresh install? This is basically a hobby machine; aside from all the programs and upgrades I've installed, I have almost no files or documents to worry about saving. Anyone have any ideas about the problems I'm having and the best way to proceed? Thanks, Al

    Read the article

  • Ubuntu 13.10 No Sound

    - by spiersie
    I was running 13.04 since last Monday, and just today I upgraded to 13.10; in both of these versions I have not managed to get my sound working. I have gone into alsamixer and disabled auto-mute, and the volumes are up. If somebody thinks they can help me fix this, I will gladly follow any steps. Please list specifically any terminal commands you need me to run, either to show specs or to solve the problem, as I am not fluent with Linux commands; this desktop, starting last Monday, is my first system to run Linux.

        blake@Blake-Ubuntu-PC:~$ lspci -v | grep -A7 -i "audio"
        00:01.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Trinity HDMI Audio Controller
                Subsystem: ASUSTeK Computer Inc. Device 8526
                Flags: bus master, fast devsel, latency 0, IRQ 53
                Memory at fef44000 (32-bit, non-prefetchable) [size=16K]
                Capabilities:
                Kernel driver in use: snd_hda_intel
        00:10.0 USB controller: Advanced Micro Devices, Inc. [AMD] FCH USB XHCI Controller (rev 03) (prog-if 30 [XHCI])
        00:14.2 Audio device: Advanced Micro Devices, Inc. [AMD] FCH Azalia Controller (rev 01)
                Subsystem: ASUSTeK Computer Inc. Device 8445
                Flags: bus master, slow devsel, latency 32, IRQ 16
                Memory at fef40000 (64-bit, non-prefetchable) [size=16K]
                Capabilities:
                Kernel driver in use: snd_hda_intel
        00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 11)

    Read the article

  • How to code Time Stop or Bullet Time in a game?

    - by David Miler
    I am developing a single-player RPG platformer in XNA 4.0. I would like to add an ability that would make time "stop" or slow down, with only the player character moving at the original speed (similar to the Time Stop spell from the Baldur's Gate series). I am not looking for an exact implementation, rather some general ideas and design patterns.

    EDIT: Thanks all for the great input. I have come up with the following solution:

        public void Update(GameTime gameTime)
        {
            GameTime newGameTime = new GameTime(
                gameTime.TotalGameTime,
                new TimeSpan(gameTime.ElapsedGameTime.Ticks / DESIRED_TIME_MODIFIER));
            gameTime = newGameTime;
            // ...
        }

    or something along these lines. This way I can set a different time for the player component and a different one for the rest. It certainly is not universal enough to work for a game where warping time like this would be a central element, but I hope it should work for this case. I kinda dislike the fact that it litters the main Update loop, but it certainly is the easiest way to implement it. I guess that is essentially the same as tesselode suggested, so I'm going to give him the green tick :)
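    The same idea can be packaged as a small helper so the scaling doesn't litter the main Update loop. This is a hedged sketch, with the entity lists and names assumed rather than taken from the question:

        using System;
        using Microsoft.Xna.Framework;

        class TimeStopExample
        {
            // Set to e.g. 0.1f while "bullet time" is active, 1f otherwise.
            float worldTimeScale = 1f;

            static GameTime ScaleGameTime(GameTime real, float scale)
            {
                // Scale only the elapsed portion; total time passes through.
                long ticks = (long)(real.ElapsedGameTime.Ticks * scale);
                return new GameTime(real.TotalGameTime, new TimeSpan(ticks));
            }

            public void Update(GameTime gameTime)
            {
                GameTime worldTime = ScaleGameTime(gameTime, worldTimeScale);

                player.Update(gameTime);      // player moves at full speed
                foreach (var entity in worldEntities)
                    entity.Update(worldTime); // everything else is slowed
            }

            // Assumed stand-ins so the sketch is self-contained.
            class Entity { public void Update(GameTime t) { } }
            readonly Entity player = new Entity();
            readonly Entity[] worldEntities = new Entity[0];
        }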

    Read the article

  • How can I install Ubuntu on my NTFS HDD without formatting?

    - by Ridvan Coban
    My HDD is just one NTFS partition (500GB), and 430GB of it is used by my photos/movies/music etc., which I never want to lose. I actually installed Ubuntu on a USB flash drive (I'm using it right now), but it is too slow that way.

    My problem is this: my computer is damaged (maybe the chipset, but I'm not sure) and none of the Windows versions (XP, Vista, 7) work on my PC; I get a blue screen error as soon as the Windows startup logo shows. But Ubuntu just works flawlessly. That means I cannot use Wubi.

    I wanted to shrink my HDD without losing data (which can be done in Windows), but found nothing about that on the Ubuntu forums. Is this possible? Or can I install Ubuntu on my NTFS filesystem? Note: I don't have the chance to back up 400GB of data. Sorry if my question is written a bit confusingly; I hope you get the point and someone has an idea ;)

    Read the article

  • Weekly Cloud Roundup 2012-15

    - by Alan Smith
    Filtering the informative, insightful and quirky from the fire hose of cloud-based hype.

    Irving Wladawsky-Berger provides some great insight into The Complex Transition to the Cloud, sharing his views on the slow adoption of cloud computing in organizations: "...a prediction by the research firm Gartner that while cloud computing will continue to grow at almost 20 percent a year, it will account for less than 5 percent of total IT spending in 2015." With a more positive mindset, Balaji Viswanathan highlights 7 Salient Trends and Directions in Cloud Computing that could be shaping the industry over the next few years.

    Cloud computing also looks to save energy: "A small business with 100 users that moved the Microsoft applications to the cloud could cut energy use and carbon emissions by 90%. Large organizations with 10,000 users saw a 30% reduction." More on that story here.

    The expansion of Windows Azure has been in the news with the announcement of "East US" and "West US" datacenters; this was covered by Visual Studio Magazine and Mary-Jo, and according to thenextweb.com Microsoft is also building a $112 million data center in Wyoming. The cloud price war is still in full swing, with Joe Panettieri discussing the pricing of Windows Azure and Office 365 and asking How Low Can It Go?

    Read the article

  • How should I structure my database to gain maximum efficiency in this scenario?

    - by Bob Jansen
    I'm developing a PHP script that analyzes the web traffic of my clients' websites. By placing a link to a JavaScript file on the client's website (think of Google Analytics), my script harvests information like the visitor's IP address, referrer link, current page link, user agent, etc. My clients can view these statistics via a control panel that I have built. These clients can also adjust profile settings, set firewall rules, create support tickets and pay invoices.

    Currently all the traffic is stored in one table. You can imagine that this table would become very large, as some of my clients receive thousands of pageviews per day. Furthermore, the traffic data of every client is stored in the same table, creating a mess. The same currently goes for the firewall rules, and for the invoice and support system.

    I'm looking for a way to structure my database in a more organized way, to hold large amounts of data for multiple users. This is the first project I'm developing that deals with this much data, and I would like to hear suggestions and tips.

    I was thinking of using multiple databases to structure the data. The main database will store user data (email, password, id, etc.) and admin/website settings. Then each client will have a unique database labeled prefix_userid, which carries the tables holding their traffic, invoice, and support ticket data. Would this be a solution, and would it slow down or speed up overall performance (that is, spreading the data over multiple databases)? I have a solid VPS, but would like to play it safe and be as efficient as possible.

    Read the article

  • What is the best program to capture my workings as an AVI or MPEG?

    - by raihanchy
    I have already used recordMyDesktop, xvidcap and Kazam. My sound works fine with other audio and video.

    xvidcap doesn't record sound at all. I have tried many ways: if I run it as 'padsp xvidcap', it gives an error like "/dev/dsp cannot be found or is missing". I have changed it to /dev/snd, still with no effect. I can even record sound through gnome-sound-recorder: after pressing the record button, I open pavucontrol, and from the Recording tab I choose 'Monitor of Analog Stereo'. But if I run xvidcap, I don't get that option in pavucontrol.

    Kazam works a bit slowly. It records sound at the beginning of the captured video, but for some unknown reason the sound eventually just goes off. Also, the video is not as smooth as xvidcap's, though Kazam outputs H264/MP4. recordMyDesktop also doesn't give sound.

    Can you guys please help me: either how to get sound with xvidcap, or how to make Kazam record nicely? I am looking for something like Camtasia, as used on Windows. Thanks in advance. Raihan

    Read the article

  • Looking for someone to point me in the right direction. I want to learn how to use hosted servers

    - by Leisure
    TL;DR: I want a Java program to run on a server, I want the server to forward a particular port from an external to an internal IP, and I want to store a few files on the server. Guides, please.

    So I made a hack-job Java program that acts as a server for my Android application. It stores data in text files and HTML files, uploads them via FTP to my webhost, and manages socket connections (using port forwarding) with any phones connected. Right now I'm running it in NetBeans on my home computer. I know that it will probably slow down or crash once about 50 phones are connected at once. Is there any way I can run this program on a server with high bandwidth? Can someone please find me a guide for that? I'm a noob and don't know where to start looking. I seriously don't know anything about renting or using servers - I need a nice guide, and recommendations.

    My requirements for the server:
    - Can handle about 2k socket connections at once
    - Can run my Java code and store my txt files
    - Can give me a port and an IP address so TCP/IP clients can be connected

    My budget: $50 CAD per month.

    Please someone set my ship sailing in the right direction; I really don't know where to look for resources.

    Read the article

  • Floating point undesirable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high speed train.

    This thinking seems to be invoking possible errors in small differences between measurements of a moving object. But small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant; can there be a problem with small roundoff errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen in the context of a high speed train? The concern about floating-point errors seems not to be well-founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language.

    The actual problem in the text states: during the inspection of the code for the emergency braking system of a new high speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?

    1. The code contains three recursive functions (well, that one is obvious).
    2. The computation of acceleration uses floating point arithmetic. All other computations use integer arithmetic.
    3. The code contains one linked list that uses dynamic memory allocation (second obvious problem).
    4. All inputs are checked to determine that they are within expected bounds before they are used.
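    To make the "accuracy cannot be guaranteed" concern concrete, here is a small illustrative snippet (my own example, not from the IEEE text) showing how repeated single-precision arithmetic drifts while scaled integer arithmetic stays exact:

        using System;

        class FloatDrift
        {
            static void Main()
            {
                float sum = 0f;
                int millimetres = 0; // integer arithmetic in fixed units

                for (int i = 0; i < 1000; i++)
                {
                    sum += 0.1f;        // 0.1 has no exact binary representation
                    millimetres += 100; // 0.1 m expressed exactly as 100 mm
                }

                Console.WriteLine(sum);                  // ~99.999, not 100
                Console.WriteLine(millimetres / 1000.0); // exactly 100
            }
        }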

    Read the article

  • What is the right way to partition for a Windows 7/Ubuntu 10.10 dual or triple boot on an OEM laptop?

    - by Denja
    Hi Linux community, I find myself struggling with the ever slow and buggy windoze OS once again. It's time to change to Ubuntu 10.10 64-bit as a really faster operating system. My laptop's hard disk has RECOVERY and HP_TOOLS partitions; they are both primary. I have the system recovery DVD for Windows 64-bit should anything happen.

    Here's the layout I used with Windows before:
    * (C:) Windows 7 system partition, NTFS - 284.89GB (Primary, Boot, Pagefile, Dump)
    * HP_TOOLS system partition, FAT32 - 99MB (Primary)
    * (D:) RECOVERY partition, NTFS - 12.90GB (Primary)
    * SYSTEM partition, NTFS - 199MB (Primary)

    Here's the layout I want to make, based on your answers:
    * (C:) Windows 7 system partition, NTFS - 60GB (Primary) (sda1)
    * (D:) Windows DATA partition (user files), NTFS - 120GB (Primary) (sda2); want to share with Linux
    * Linux root, Ext4 - 100GB (Primary) (sda3) (Ubuntu 10.10 64bit)
    * Linux swap - RAM size, 3GB (sda4)
    * Linux root, Ext3 - 15.9GB (Extended) (sda5) (OpenSuse or Puppy)

    Here is my new Ubuntu 10.10 64bit layout in use now:
    * SYSTEM partition, NTFS - 199MB (Primary) (sda1) - "Partition 1 does not end on cylinder boundary." (?)
    * (C:) Windows 7 system partition, NTFS - 90GB (Primary) (sda2)
    * (D:) Windows 7 RECOVERY partition, NTFS - 12.90GB (Primary) (sda3)
    * Linux system partition, EXTENDED - 195GB (Logical)
    * Linux root, Ext4 - 10GB (Extended) (sda5)
    * Linux home, Ext3 - 185GB (Extended) (sda6)

    I didn't know if I could wipe all previous partitions when I installed Ubuntu, because of the RECOVERY partition, so I just made the space for my extended partition by deleting HP_TOOLS (FAT32). By doing this I managed to make and successfully install Ubuntu 64, but I couldn't actually make the partition for the swap or a third Linux OS.

    Question 1: What is the right way to partition for a Windows 7/Ubuntu 10.10 dual or triple boot on an OEM laptop? Thank you in advance for your advice and suggestions, and Happy New Year to all!!

    Read the article

  • Creating a bootable flash without overlayfs

    - by Septagram
    I want to create a USB stick to carry my Ubuntu around with me everywhere. It's not intended to spread Ubuntu by installing it everywhere, but rather for running my configured system on any computer I come across. So far, I went with installing Ubuntu with UNetbootin, but I have some issues with this.

    When installed with UNetbootin, the original disk image is kept intact on the flash drive, forever. Also, a file is created for persistent storage, and during boot it is accessed together with the image by overlayfs. This, in my opinion, has the following problems:

    1. If the system is updated regularly, then files from the image are overwritten in persistent storage, doubling their size and wasting precious space.
    2. Persistent storage has a fixed size that you have to define from the start, again wasting precious space.
    3. I'm not 100% sure, but maybe using overlayfs makes disk access slower, and more so on relatively slow devices.

    So I'd like to find another solution: either to get rid of the original image, to install Ubuntu "normally" on a separate ext2 partition, or maybe even to install it on the main vfat partition on the USB stick. Suggestions?

    Read the article

  • How do you get past the Analysis to Paralysis when working on a new project?

    - by Cape Cod Gunny
    I've been struggling with how to get my project going. I've got an old software package that is in desperate need of a rewrite. I haven't compiled the source code since 2004. It still sells, and it's stable, but it does require "Run this program in compatibility mode for:" on a lot of the newer Windows systems. It's also one of those hard-coded 640 x 480 screen resolution programs. Yuck!

    I can't seem to get started with this rewrite. I'm constantly fiddling around with different things. I'll play around with different fluid layouts for a while. Then I start looking at how the main menu should work/look. I quickly find out that there's this thing called "cool bars" and I'll spend hours playing with that. Then I start thinking about stuff like "Oh, I need to make sure that the screen sizes are preserved, so when the application gets relaunched it remembers how the screens were positioned." Which leads to: what happens if they have two monitors? Which leads to: what happens if they have a quad screen? Yikes, it's got to stop.

    I have always been a slow starter. I think about stuff long and hard up front. This has always plagued me. Once I get my mind made up, then bam... I'm off and running. I'm looking for advice from other one-person software companies that can help someone like me get off to a quicker start.

    Read the article
