Search Results

Search found 19768 results on 791 pages for 'hardware programming'.


  • onTouchListener ToggleButton ignores first press?

    - by Paul
    I have a ToggleButton that should run code when I press it down, and more code when I let go. However, the first time I press and release, nothing happens; every time after that it is fine. Why is this? I can see that on the first press the method only runs when I let go of the button (and it does not trigger any onTouch part of the method). How can I get around this and have it work for the first press?

        public void pushtotalk3(final View view) {
            ((ToggleButton) view).setChecked(true);
            ((ToggleButton) view).setChecked(false);
            view.setOnTouchListener(new OnTouchListener() {
                @Override
                public boolean onTouch(View v, MotionEvent event) {
                    // if more than one call, change this code
                    int callId = 0;
                    for (SipCallSession callInfo : callsInfo) {
                        callId = callInfo.getCallId();
                        Log.e(TAG, "" + callInfo.getCallId());
                    }
                    final int id = callId;
                    switch (event.getAction()) {
                        case MotionEvent.ACTION_DOWN: { // press
                            ((ToggleButton) view).setBackgroundResource(R.drawable.btn_blue_glossy);
                            ((ToggleButton) view).setChecked(true);
                            OnDtmf(id, 17, 10);
                            OnDtmf(id, 16, 9);
                            return true;
                        }
                        case MotionEvent.ACTION_UP: { // release
                            ((ToggleButton) view).setBackgroundResource(R.drawable.btn_lightblue_glossy);
                            ((ToggleButton) view).setChecked(false);
                            OnDtmf(id, 18, 11);
                            OnDtmf(id, 18, 11);
                            return true;
                        }
                        default:
                            return false;
                    }
                }
            });
        }

    EDIT: the XML for the button:

        <ToggleButton android:id="@+id/PTT_button5"
            android:layout_width="0dp"
            android:layout_height="fill_parent"
            android:text="@string/ptt5"
            android:onClick="pushtotalk5"
            android:layout_weight="50"
            android:textOn="Push To Talk On"
            android:textOff="Push To Talk Off"
            android:background="@drawable/btn_lightblue_glossy"
            android:textColor="@android:color/white"
            android:textSize="15sp" />

    EDIT: hardware problem, can't test solutions at the moment.
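
    One likely explanation (an assumption, since it can't be verified without the device): the touch listener is only attached inside pushtotalk3, which itself runs as the android:onClick handler, so the first tap is consumed by the click that installs the listener, and only later taps reach onTouch. A minimal sketch of attaching the listener once up front instead (the activity class, layout, and button ID are assumed for illustration):

        public class PttActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                // attach once, before the first touch ever happens
                ToggleButton ptt = (ToggleButton) findViewById(R.id.PTT_button3);
                ptt.setOnTouchListener(new View.OnTouchListener() {
                    @Override
                    public boolean onTouch(View v, MotionEvent event) {
                        switch (event.getAction()) {
                            case MotionEvent.ACTION_DOWN:
                                // press handling here
                                return true;
                            case MotionEvent.ACTION_UP:
                                // release handling here
                                return true;
                            default:
                                return false;
                        }
                    }
                });
            }
        }

    With this arrangement the android:onClick attribute can be dropped from the XML entirely.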

    Read the article

  • How to debug into my apache module built using a Makefile?

    - by AJ
    Firstly, I come from a Windows / Visual Studio / C++ background, and now I am developing in an Ubuntu environment. With the help of a Makefile I built mymodule.so and copied it to the modules folder of Apache, and the module appears to be working fine. But I would like to step into this module to understand it better. So, first: is there any way to get something similar to the Visual Studio debugger feel while debugging this module? I read that I can use gdb to debug Apache modules; can somebody tell me in detail how this is done, or point me to some resource that explains it? Ideally I would like to single-step through the code. I am trying the Code::Blocks IDE, which has some debugging support. Using the IDE and a custom Makefile I build the module and copy it to the module location, but how do I debug it? How do I hook into the Apache process? Should I use "Attach to Process"? I tried that with the pid of httpd, but with no success. Also, is there some flag I should set while building so that the .so file is debuggable? I am pretty new to Linux because I come from a Windows programming background. Kindly suggest how I go about this. Thanks in advance, Arjun
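
    For what it's worth, a typical gdb workflow for this looks roughly like the sketch below (paths, module and function names are assumptions; adapt them to your setup). The key points: build the .so with -g and without optimization, and run Apache with -X so it stays in the foreground as a single process, which avoids guessing which child process to attach to.

        # rebuild the module with debug symbols and no optimization
        make CFLAGS="-g -O0"
        sudo cp mymodule.so /usr/lib/apache2/modules/

        # run apache as a single foreground process under gdb
        sudo gdb /usr/sbin/apache2
        (gdb) run -X
        # once it is up, interrupt with Ctrl-C, then set a breakpoint
        # in your module (now loaded, so its symbols are available):
        (gdb) break mymodule_handler
        (gdb) continue
        # issue a request from another terminal; gdb stops in the handler
        (gdb) next            # step over
        (gdb) step            # step into
        (gdb) print some_var  # inspect locals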

    Read the article

  • Why is this exception thrown in the Visual Studio C compiler?

    - by Shane Larson
    Hello. I am trying to become more adept at C programming, and I was attempting to display a character from the input stream while inside the loop that reads it. I am using the getchar() function. An exception is thrown when the printf statement in my code is present (if I comment out the printf line, the exception is not thrown). Exception:

        Unhandled exception at 0x611c91ad (msvcr90d.dll) in firstOS.exe:
        0xC0000005: Access violation reading location 0x00002573.

    Here is the code... any thoughts? Thank you. PS: I am including the stdio.h header.

        /* getCommandPromptNew - obtains a string command prompt. */
        void getCommandPromptNew(char s[], int lim) {
            int i, c;
            for (i = 0; i < lim - 1 && (c = getchar()) != EOF && c != '\n'; ++i) {
                s[i] = c;
                printf('%s', c);
            }
        }
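
    A note on the likely cause: printf('%s', c) has two problems. '%s' is a multi-character constant, an integer, not a string, so printf receives a small integer where it expects a pointer to the format string and dereferences it; tellingly, the faulting address 0x00002573 is exactly the value of '%s' (0x25 is '%', 0x73 is 's'). Also, %s expects a char*, while c is a single character. A sketch of the corrected function:

        #include <stdio.h>

        /* getCommandPromptNew - obtains a string command prompt. */
        void getCommandPromptNew(char s[], int lim)
        {
            int i, c;
            for (i = 0; i < lim - 1 && (c = getchar()) != EOF && c != '\n'; ++i) {
                s[i] = c;
                printf("%c", c);  /* format is a string literal; %c prints one char */
            }
            s[i] = '\0';          /* terminate the string for later use */
        }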

    Read the article

  • Is the Unicode prefix N still needed in SQL Compact Edition?

    - by Dave
    At least in previous versions of SQL Server, you had to prefix Unicode string constants with an "N" to have them treated as Unicode:

        select foo from bar where fizz = N'buzz'

    (See "Server-Side Programming with Unicode" for the SQL Server 2005 "from the horse's mouth" documentation.) We have an application that is using SQL Compact Edition, and I am wondering whether that is still necessary. From the testing I am doing, it appears to be unneeded. That is, the following SQL statements both behave identically in SQL CE, but the second one fails in SQL Server 2005:

        select foo from bar where foo = N'???'
        select foo from bar where foo = '???'

    (I hope I'm not swearing in some language I don't know about...) I'm wondering whether that is because all strings are treated as Unicode in SQL CE, or whether perhaps the default code page is now Unicode-aware. If anyone has seen any official documentation, either yea or nay, I'd appreciate it. I know I could go the safe route and just add the "N"s, but a lot of code would need to change, and if I don't need to, I don't want to! Thanks for your help!
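
    One quick experiment that would settle the "all strings are Unicode" theory (a sketch, based on my recollection that SQL CE supports only the national character types nchar, nvarchar, and ntext):

        -- If SQL CE really supports only Unicode string types, the first
        -- statement succeeds and the second is rejected:
        CREATE TABLE t1 (c1 nvarchar(10));
        CREATE TABLE t2 (c2 varchar(10));

    If the second CREATE fails, there is no non-Unicode string storage for the N prefix to protect against.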

    Read the article

  • int considered harmful?

    - by Chris Becke
    Working on code meant to be portable between Win32, Win64, and Cocoa, I am really struggling to grasp what the @#$% the various standards committees involved over the past decades were thinking when they first came up with, and then perpetuated, the crime against humanity that is the set of native C types: char, short, int, and long. On the one hand, as an old-school C++ programmer, there are few statements as elegant and as simple as

        for (int i = 0; i < some_max; i++)

    but now it seems that, in the general case, this code can never be correct. Oh sure, given a particular version of MSVC or GCC with specific targets, the size of int can be safely assumed. But in the case of writing very generic C/C++ code that might one day be used on 16-bit hardware, or 128-bit, or just be exposed to a particularly weirdly set up 32/64-bit compiler, how does one use int in C++ code such that the resulting program has predictable behavior on any and all possible C++ compilers that implement C++ according to spec? To resolve these unpredictabilities, the newer standards introduced size_t, uintptr_t, ptrdiff_t, int8_t, int16_t, int32_t, int64_t and so on. Which leaves me thinking that a raw int, anywhere in pure C++ code, should really be considered harmful, as there is some (completely conforming) compiler that is going to produce an unexpected or incorrect result with it (and probably an attack vector as well).
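
    To make the alternative concrete, a small sketch of the "pick the type by intent" style that the fixed-width headers enable:

        #include <cstdint>   // int32_t, int64_t, uint_fast16_t, ...
        #include <cstddef>   // std::size_t

        int64_t sum_values(const int32_t* data, std::size_t count)
        {
            int64_t sum = 0;                          // exactly 64 bits, everywhere
            for (std::size_t i = 0; i < count; ++i)   // size_t always fits an index
                sum += data[i];
            return sum;
        }

    The loop variable is the one place plain int usually survives scrutiny; everywhere a size, offset, or wire format is involved, the explicit types say what the code actually needs.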

    Read the article

  • What's wrong with this jQuery fading gallery code?

    - by Meep3D
    So I am creating a fading gallery and am a bit of a noob at JavaScript (but not at programming). I have no idea what is wrong, though. Here's the function in question:

        function show_next() {
            // Hide current
            $('.s_gallery_images li.s_current').fadeTo(200, .2);
            $('.s_gallery_images li.s_current').css("border", "1px green solid");

            if ($('.s_gallery_images li').hasClass('.s_current')) {
                console.log('Incrementing existing');
                // Class already exists
                $('.s_current:first').removeClass('s_current').next().addClass('s_current');
                // Was that the last one?
                if ($('.s_gallery_images li').hasClass('.s_current')) {
                    console.log('Current found');
                } else {
                    // Class doesn't exist - add to first
                    $('.s_gallery_images li:first').addClass('.s_current');
                    console.log('Wrapping');
                }
            } else {
                console.log('Adding new class');
                // Class doesn't exist - add to first
                $('.s_gallery_images li:first').addClass('.s_current');
            }

            // Show new marked item
            $('.s_gallery_images li.s_current').fadeTo(200, .8);
        }

    The HTML is a very simple:

        <ul class="s_gallery_images">
            <li><img src="imagename" alt="alt" /></li>
            <li><img src="imagename" alt="alt" /></li>
            <li><img src="imagename" alt="alt" /></li>
        </ul>

    and it displays the list and the images fine. I am using Firebug's console.log for debugging, and I have a style set for s_current (a bright border), but nothing happens at all. The Firebug console log says:

        Adding new class
        Incrementing existing
        Current found
        Incrementing existing
        Current found
        Incrementing existing
        Current found
        ...

    to infinity. The function is called on a setInterval timer, and as far as I can tell it should be working (I have done something similar before), but it just isn't happening :(
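
    The log arguably gives the game away, and the likely culprit is the leading dot: hasClass() and addClass() take a bare class name, not a selector, so addClass('.s_current') creates a literal class named ".s_current". That one element keeps the outer hasClass('.s_current') returning true forever, while the $('.s_current:first') selector (which means class "s_current", no dot) never matches anything, hence the endless "Incrementing existing / Current found" with nothing visibly changing. A sketch of the fixed middle section:

        // hasClass()/addClass() want 'class'; only selectors want '.class'
        if ($('.s_gallery_images li').hasClass('s_current')) {
            $('.s_current:first').removeClass('s_current').next().addClass('s_current');
            if (!$('.s_gallery_images li').hasClass('s_current')) {
                // wrapped past the last item: start again at the first
                $('.s_gallery_images li:first').addClass('s_current');
            }
        } else {
            $('.s_gallery_images li:first').addClass('s_current');
        }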

    Read the article

  • How can I work around WinXP using ports 1025-5000 as ephemeral?

    - by Chris Dolan
    If you create a TCP client socket with port 0 instead of a non-zero port, then the operating system chooses any free ephemeral port for you. Most OSes choose ephemeral ports from the IANA dynamic port range of 49152-65535. However in Windows Server 2003 and earlier (including XP) Microsoft used ports 1025-5000 as the ephemeral range, according to their bind() documentation. I run multiple Java services on the same hardware. On rare occasions, this range collides with well-known ports that I use for other services (e.g. port 4160 for Jini discovery). While rare, this has caused real problems. Is there any easy way to tell Windows or Java to use a different port range for client sockets? Microsoft's docs indicate that I can change the high end of that range via the MaxUserPort TcpIP registry setting, but I see no way to change the low end. Update: I've made some progress on this. It looks like Microsoft has a concept of reserved ports that are exceptions to the ephemeral port range. There's a registry setting that lets you change this permanently and apparently there must be an API to do the same thing because there's a data structure that holds high/low values for reserved port ranges, but I can't find the actual function call anywhere... The registry solution may work, but now I'm fixated on this API.
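
    For reference, the registry setting in question is the ReservedPorts value under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, which excludes the listed ranges from ephemeral allocation. A purely Java-side alternative (a sketch of an assumed workaround, not an official API) is to bind client sockets to an explicit local port in the IANA dynamic range before connecting:

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Socket;
        import java.util.Random;

        public final class ExplicitEphemeral {
            private static final Random RAND = new Random();

            public static Socket connect(String host, int port) throws IOException {
                for (int attempt = 0; attempt < 10; attempt++) {
                    // pick a local port in 49152-65535, outside Windows' 1025-5000 range
                    int localPort = 49152 + RAND.nextInt(65536 - 49152);
                    Socket s = new Socket();
                    try {
                        s.bind(new InetSocketAddress(localPort));
                        s.connect(new InetSocketAddress(host, port));
                        return s;
                    } catch (IOException portTaken) {
                        s.close();  // collision or port in use: try another
                    }
                }
                throw new IOException("no free local port found after 10 attempts");
            }
        }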

    Read the article

  • function objects versus function pointers

    - by kumar_m_kiran
    Hi all, I have two questions related to function objects and function pointers.

    Question 1: When I read about the different uses of the STL sort algorithm, I see that the third parameter can be a function object. Below is an example:

        class State {
        public:
            // ...
            int population() const;
            float aveTempF() const;
            // ...
        };

        struct PopLess : public std::binary_function<State, State, bool> {
            bool operator()(const State &a, const State &b) const {
                return popLess(a, b);
            }
        };

        sort(union, union + 50, PopLess());

    Now, how does the statement sort(union, union+50, PopLess()) work? PopLess() must resolve into something like PopLess tempObject.operator(), which would be the same as executing operator() on a temporary object. I see this as passing the return value of the overloaded operation, i.e. bool (as in my example), to the sort algorithm. So then, how does the sort function resolve the third parameter in this case?

    Question 2: Do we derive any particular advantage from using function objects versus function pointers? If we use the function pointer below, does it carry any disadvantage?

        inline bool popLess(const State &a, const State &b) {
            return a.population() < b.population();
        }

        std::sort(union, union + 50, popLess);  // sort by population

    PS: Both of the above references (including the example) are from the book "C++ Common Knowledge: Essential Intermediate Programming" by Stephen C. Dewhurst. I was unable to decode the topic's content, and so have posted here for help. Thanks in advance.
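
    To make the resolution concrete: sort is a template, so the compiler deduces the type of the third argument. PopLess() does not call anything at the call site; it simply constructs a temporary PopLess object that is passed by value. Inside the algorithm, every comparison invokes that object's operator(). A sketch of the difference, and of why functors often win on performance:

        State states[50];

        // Compare is deduced as the class type PopLess; each comparison
        // compiles to PopLess::operator()(a, b), which is easy to inline.
        std::sort(states, states + 50, PopLess());

        // Compare is deduced as bool (*)(const State&, const State&);
        // each comparison is an indirect call through the pointer, which
        // compilers generally cannot inline.
        std::sort(states, states + 50, popLess);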

    Read the article

  • Adding iPod Support to (previously) iPhone Only App

    - by rjstelling
    When I started on my current project, there was already an app in the App Store. This app was iPhone-only, and my first task was to test and build a version that also ran on an iPod Touch. About three weeks ago Apple removed the option on iTunes Connect to set the device requirements, and sent an email out to all developers: "The App Store requires that you provide metadata about your application before submitting it. While most of this metadata is specified using the iPhone Developer Program Portal, the process for selecting device-related dependencies in iTunes Connect is no longer available. Instead, if your app relies on features that are specific to a device, such as the compass on iPhone 3GS, add the UIRequiredDeviceCapabilities key to your app's Info.plist file to indicate the specific hardware feature required." When I compiled the iPod-compatible version, I set the device requirements (UIRequiredDeviceCapabilities) in the Info.plist to:

        location-services (gps or skyhook)
        wi-fi (any device)

    However, as the app was originally uploaded with the "iPhone only" option set in iTunes Connect, that appears to remain the default. The kicker is that because Apple has removed this feature, there is no way to change it! Has anyone come up against this problem, and how did you solve it? Is it possible I have incorrect values in UIRequiredDeviceCapabilities?

    UPDATE: The app runs fine on an iPod Touch if installed as a development version via Xcode. The problem is that on the App Store it is listed as iPhone-only, and when iPod Touch users search in the App Store, no results are returned.
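
    For reference, the Info.plist entry looks roughly like the sketch below (key strings quoted from memory, so treat them as assumptions to verify against the docs). One thing worth double-checking: the documented capability string is wifi, with no hyphen, while the list above says "wi-fi"; an unrecognized value in UIRequiredDeviceCapabilities could plausibly produce surprising store behavior. Every listed capability is required, so the array should contain only what the app truly cannot run without.

        <key>UIRequiredDeviceCapabilities</key>
        <array>
            <string>wifi</string>
            <string>location-services</string>
        </array>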

    Read the article

  • Strange XCode debugger behavior with UITableView datasource

    - by Tarfa
    Hey guys, I've got a perplexing issue. In my subclassed UITableViewController, the datasource methods lose their tableView reference depending on which lines of code I put inside the method. For example, in this code block:

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            // Return the number of sections.
            return 3;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            // Return the number of rows in the section.
            return 5;
        }

        // Customize the appearance of table view cells.
        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            id i = tableView;
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
            }
            // Configure the cell...
            return cell;
        }

    the line id i = tableView; causes the tableView to become nil (0x0), and it is nil before I ever start stepping into the method. If I insert an assignment statement above the id i = tableView; statement:

        CGFloat x = 5.0;
        id i = tableView;

    then tableView retains its pointer (i.e. is not nil), provided I place the breakpoint after the id i = tableView; line. In other words, the breakpoint must be set after that assignment for tableView to retain its pointer. If the breakpoint is set before the assignment and I just hang at that breakpoint for a bit, then after a couple of seconds the console logs this error message:

        Assertion failed: (cls), function getName, file /SourceCache/objc4_Sim/objc4-427.5/runtime/objc-runtime-new.mm, line 3990.

    Although the code works when I don't step through the method, I need my debugger to work! It makes programming kind of challenging when your debugging tools become your enemy. Anyone know the cause and the solution? Thanks.

    Read the article

  • How do I track down sporadic ASP.NET performance problems in a production environment?

    - by Steve Wortham
    I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast, but occasionally it is just really, really slow; I mean 5-10 second load times. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company, but the problems continue. I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. I could then look back at the logs and see exactly what my website was doing at the time of extreme slowness. Is there a .NET logging system that'll allow me to make calls into it from code while simultaneously tracking performance? What would you recommend?
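
    There is no single standard answer, but a minimal home-grown version is cheap to sketch (an assumed approach, not a specific logging product): wrap the interesting events in a timer and record a CPU sample next to each log line. Note that a PerformanceCounter's first NextValue() call returns 0, so the counter should be created once and reused.

        using System;
        using System.Diagnostics;

        public static class PerfLog
        {
            private static readonly PerformanceCounter Cpu =
                new PerformanceCounter("Processor", "% Processor Time", "_Total");

            public static void Time(string label, Action work)
            {
                Stopwatch sw = Stopwatch.StartNew();
                work();
                sw.Stop();
                // one line per event: what ran, how long, how busy the box was
                Trace.WriteLine(String.Format("{0}: {1} ms, CPU {2:F0}%",
                    label, sw.ElapsedMilliseconds, Cpu.NextValue()));
            }
        }

        // usage: PerfLog.Time("RebuildCache", () => RebuildCache());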

    Read the article

  • Is Form validation and Business validation too much?

    - by Robert Cabri
    I've got a question about form validation versus business validation. I see a lot of frameworks that use some sort of form-validation library: you submit some values and the library validates the values from the form; if something is not OK, it shows some errors on your screen. If all goes to plan, the values are set into domain objects, where the values will be, or better said should be, validated again, most likely by the same validation as in the validation library. I know two PHP frameworks with this kind of construction: Zend and Kohana. When I look at programming principles like Don't Repeat Yourself (DRY) and the single responsibility principle (SRP), this isn't a good way: as you can see, it validates twice. Why not create domain objects that do the actual validation? Example: a form with a username field and an email field is submitted. The values of the username and email fields are populated into two different domain objects, Username and Email:

        class Username {}
        class Email {}

    These objects validate their data and, if it is not valid, throw an exception. Do you agree? What do you think about this approach? Is there a better way to implement validation? I'm confused about how many frameworks/developers handle this stuff: are they all wrong, or am I missing a point?

    Edit: I know there should also be some kind of client-side validation; that's a different ballgame in my opinion. If you have comments on that and a way to deal with it, please share.
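
    A sketch of the value-object idea in PHP (class name and exception type are illustrative): validation lives in exactly one place, and anything that was successfully constructed is known to be valid.

        class Email
        {
            private $value;

            public function __construct($value)
            {
                if (filter_var($value, FILTER_VALIDATE_EMAIL) === false) {
                    throw new InvalidArgumentException("Invalid email: $value");
                }
                $this->value = $value;
            }

            public function __toString()
            {
                return $this->value;
            }
        }

    The form layer then becomes a thin translator: try to construct the domain objects, catch the exceptions, and map them to field errors, rather than duplicating the rules in a separate validation library.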

    Read the article

  • Python and Unicode: How everything should be Unicode

    - by A A
    Forgive me if this is a long question: I have been programming in Python for around six months, self-taught, starting with the Python tutorial, then SO, and then just using Google for things. Here is the sad part: no one told me all strings should be Unicode. No, I am not lying or making this up, but where does the tutorial mention it? Most examples I see just use byte strings instead of Unicode strings. I was browsing and came across a question on SO which says that every string in Python should be Unicode. This pretty much made me cry! I read that every string in Python 3.0 is Unicode by default, so my questions are for 2.x:

    1. Should I do print u'Some text', or just print 'Text'?
    2. Everything should be Unicode: does this mean that if I have a tuple t = ('First', 'Second'), it should be t = (u'First', u'Second')?
    3. I read that I can do from __future__ import unicode_literals and then every string will be a Unicode string; should I do this inside a container also?
    4. When reading/writing to a file, should I use the codecs module? Or should I just use the standard way of reading/writing and encode or decode where required?
    5. If I get a string from, say, raw_input(), should I convert that to Unicode also?

    What is the common approach to handling all of the above issues in 2.x? The from __future__ import unicode_literals statement? Sorry for being such a noob, but this changes what I have been doing for a long time, and clearly I am confused.
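
    For what it's worth, a sketch of the usual 2.x conventions (decode at the boundaries, unicode everywhere in between):

        from __future__ import unicode_literals  # plain 'Text' literals become unicode

        import codecs
        import sys

        t = ('First', 'Second')   # with the import above, both items are unicode

        # files: let codecs handle the decoding for you
        with codecs.open('data.txt', 'r', encoding='utf-8') as f:
            for line in f:        # each line arrives as a unicode object
                print line

        # terminal input: raw_input() returns bytes, so decode explicitly
        name = raw_input('Name: ').decode(sys.stdin.encoding or 'utf-8')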

    Read the article

  • Why does this date YYYY-MM-DD regex fail in Java?

    - by ProfessionalAmateur
    Hello Stack Overflow, my first question and I'm excited... I've lurked since go-live and love the site; I apologize for any newbie errors, formatting, etc. I'm attempting to validate the format of a string field that contains a date, in Java. We receive the date as a string, and I validate its format before parsing it into a real Date object. The format being passed in is YYYY-MM-DD. However, I'm stuck on one of my tests. If I pass in "1999-12-33", the test fails (as it should, with a day number of 33) with this incomplete pattern:

        ((19|20)\\d{2})-([1-9]|0[1-9]|1[0-2])-([12][0-9]|3[01])

    However, as soon as I add the 0[1-9]|[1-9] alternatives to the day group, it passes the test (but should not):

        ((19|20)\\d{2})-([1-9]|0[1-9]|1[0-2])-(0[1-9]|[1-9]|[12][0-9]|3[01])

    (As an additional note, I know I can change 0[1-9]|[1-9] into 0?[1-9], but I wanted to break everything down to its simplest form to try to find why this isn't working.) Here is the scrap test I've put together to run through the different date scenarios:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class scrapTest {

            public scrapTest() {
            }

            public static void main(String[] args) {
                scrapTest a = new scrapTest();
                boolean flag = a.verfiyDateFormat("1999-12-33");
            }

            private boolean verfiyDateFormat(String dateStr) {
                Pattern datePattern = Pattern.compile(
                    "((19|20)\\d{2})-([1-9]|0[1-9]|1[0-2])-(0[1-9]|[1-9]|[12][0-9]|3[01])");
                Matcher dateMatcher = datePattern.matcher(dateStr);
                if (!dateMatcher.find()) {
                    System.out.println("Invalid date format!!! -> " + dateStr);
                    return false;
                }
                System.out.println("Valid date format.");
                return true;
            }
        }

    I've been programming for ~10 years but am extremely new to Java, so please feel free to explain anything as elementary as you see fit.
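
    The behavior has a simple explanation that is easy to miss: Matcher.find() looks for a match anywhere in the input, so once a one-digit day like [1-9] is allowed, "1999-12-33" passes because find() matches the substring "1999-12-3". Switching to matches(), which must consume the whole string (or anchoring the pattern with ^ and $), restores the expected failure. A sketch, also assuming months and days are always two digits in YYYY-MM-DD:

        private boolean verifyDateFormat(String dateStr) {
            Pattern datePattern = Pattern.compile(
                "((19|20)\\d{2})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01])");
            // matches() requires the entire string to match, unlike find()
            return datePattern.matcher(dateStr).matches();
        }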

    Read the article

  • Method of transforming 3D vectors with a matrix

    - by Drew Noakes
    I've been doing some reading on transforming a Vector3 with matrices, and am tossing up between digging deeper into the math and coding this myself versus using existing code. For whatever reason my school curriculum never included matrices, so I'm filling a gap in my knowledge. Thankfully I only need a few simple things, I think. The context is that I'm programming a robot for the RoboCup 3D league. I'm coding it in C#, but it'll have to run on Mono. Ideally I wouldn't use any existing graphics libraries for this (WinForms/WPF/XNA), as all I really need is a neat subset of matrix transformations. Specifically, I need translation and x/y/z rotations, and a way of combining multiple transformations into a single matrix. This will then be applied to my own Vector3 type to produce the transformed Vector3. I've read differing advice about this. For example, some model the transformation with a 4x3 matrix, others with a 4x4 matrix. Also, some examples show that you need a fourth value of 1 in the vector's matrix. What happens to this value when it's included in the output?

        [x y z 1] * [1 0 0 0]  =  [a b c d]
                    [0 1 0 0]
                    [0 0 1 0]
                    [2 4 6 1]

    The parts I'm missing are: what sizes my matrices should be; compositing transformations by multiplying the transformation matrices together; and transforming 3D vectors with the resulting matrix. As I mostly just want to get this running, any pseudo-code would be great. Information about which matrix values perform which transformations is quite clearly defined on many pages, so it need not be discussed here unless you're very keen :)
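
    On the question of the fourth value: with homogeneous coordinates a point carries w = 1, and for affine transforms (rotations plus translations) the matrix's last column is (0, 0, 0, 1), so the output w stays 1 and is simply dropped; that is also why a 4x3 matrix suffices in the affine case. The w = 1 is what lets the translation row (the 2 4 6 above) be added in. A C# sketch under those assumptions (row vectors, translation in the bottom row):

        public struct Vector3 { public double X, Y, Z; }

        // Apply a 4x4 transform to a point (w = 1). m is indexed [row, column].
        public static Vector3 Transform(Vector3 v, double[,] m)
        {
            double x = v.X * m[0, 0] + v.Y * m[1, 0] + v.Z * m[2, 0] + m[3, 0];
            double y = v.X * m[0, 1] + v.Y * m[1, 1] + v.Z * m[2, 1] + m[3, 1];
            double z = v.X * m[0, 2] + v.Y * m[1, 2] + v.Z * m[2, 2] + m[3, 2];
            double w = v.X * m[0, 3] + v.Y * m[1, 3] + v.Z * m[2, 3] + m[3, 3];
            // affine transforms leave w == 1; the divide only matters for
            // projective transforms
            return new Vector3 { X = x / w, Y = y / w, Z = z / w };
        }

        // Composition: multiply the matrices together once (order matters),
        // then apply the single product to every vector.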

    Read the article

  • Sql Exception: Error converting data type numeric to numeric

    - by Lucifer
    Hello. We have a very strange issue with a database that has been moved from staging to production. The first time the database was moved, it was by detaching, copying, and reattaching; the second time we tried restoring from a backup of staging. Both SQL Servers are the same version of MS SQL 2008, running on 64-bit hardware. The code accessing the database is the same build, built using the .NET 2.0 framework. Here is the error message and some of the stack trace:

        Exception Details: System.Data.SqlClient.SqlException:
            Error converting data type numeric to numeric.

        Stack Trace:
        [SqlException (0x80131904): Error converting data type numeric to numeric.]
        System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1953274
        System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4849707
        System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
        System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
        System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +204
        System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +954
        System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162
        System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) +175
        System.Data.SqlClient.SqlCommand.ExecuteNonQuery() +137

        Version Information: Microsoft .NET Framework Version: 2.0.50727.4200;
        ASP.NET Version: 2.0.50727.4016
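
    One hypothesis worth checking (an assumption here, since the post doesn't show the calling code): this error is commonly a precision/scale mismatch on a decimal SqlParameter, where a value that happened to fit in staging overflows the parameter's declared or defaulted precision in production. Declaring them explicitly removes the guesswork:

        // sketch: match Precision/Scale to the column definition
        SqlParameter p = cmd.Parameters.Add("@amount", SqlDbType.Decimal);
        p.Precision = 18;
        p.Scale = 2;
        p.Value = amount;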

    Read the article

  • Handling learning curve for new developers

    - by pete the pagan-gerbil
    Our company likes to hire new developers with no experience. We have a core set of skills that we try to get them up to speed with, like ASP.NET and WinForms: basic programming, the .NET languages, and the things they'll need to maintain and write. We also try to mentor them through early projects, so they can learn from someone more experienced. Recently, we've been seeing the benefits of new frameworks like MVC and ideas like unit testing and TDD (and by extension, dependency injection and IoC), and we'd like to start using these in the team. However, this increases the time a junior needs before they can get started on a new project, because doing something like unit tests wrong could cause major headaches months or years later in maintenance, especially if we believe the unit tests to be comprehensive. How do you handle the huge number of things that a junior will need to take on, acknowledging that the business wants them working independently as soon as possible? Is it acceptable to tell them not to unit test until a while after they are independent (and give them small, simpler projects in the meantime), before taking them to 'level 2' of the core skills?

    Read the article

  • FILE* issue in PPU-side code

    - by Cristina
    We are working on a homework assignment in CELL programming for college, and their feedback on our questions is kind of slow, so I thought I could get some faster answers here. I have PPU-side code that tries to open a file passed down through char* argv[]; however, this doesn't work: the pointer assignment fails and I get a NULL. My first idea was that the file isn't in the correct directory, so I copied it to every possible and logical place; my second idea is that maybe the PPU wants this pointer in its LS area, but I can't deduce whether that's the bug or not. So... my question is: what am I doing wrong? I am working with the Fedora 7 Cell SDK, with Eclipse as an IDE. Maybe my argument setup is wrong, though the program receives the name of the file correctly. Code on request:

        images_t *read_bin_data(char *name)
        {
            FILE *file;
            images_t *img;
            uint32_t *buffer;
            uint8_t buf;
            unsigned long fileLen;
            unsigned long i;

            // Open file
            file = (FILE*)malloc(sizeof(FILE));
            file = fopen(name, "rb");
            printf("[Debug]Opening file %s\n", name);
            if (!file) {
                fprintf(stderr, "Unable to open file %s", name);
                return NULL;
            }
            // .......
        }

    Main launch:

        int main(int argc, char* argv[])
        {
            int i, img_width;
            int modif_this[4] __attribute__ ((aligned(16))) = {1, 2, 3, 4};
            images_t *faces, *nonfaces;
            spe_context_ptr_t ctxs[SPU_THREADS];
            pthread_t threads[SPU_THREADS];
            thread_arg_t arg[SPU_THREADS];

            // initialize img_width
            img_width = atoi(argv[1]);
            printf("[Debug]Img size is %i\n", img_width);
            faces = read_bin_data(argv[3]);
            // .......
        }

    Thanks for the help.
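
    Two observations, offered as a sketch rather than a diagnosis. First, the malloc line is unnecessary: fopen allocates its own FILE structure, and the malloc'd block leaks the moment fopen's result overwrites the pointer, though that by itself won't make fopen fail. Second, perror() will report why the open failed (a wrong working directory when launched from Eclipse is a classic), which narrows things down quickly:

        images_t *read_bin_data(char *name)
        {
            FILE *file = fopen(name, "rb");   /* no malloc needed */
            printf("[Debug] opening file %s\n", name);
            if (!file) {
                perror(name);   /* prints the errno reason, e.g. "No such file" */
                return NULL;
            }
            /* ... rest of the original function ... */
        }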

    Read the article

  • Get a "sqlceqp35.dll" error when debugging but not when running deployed code.

    - by nj
    In our current Windows Mobile project, a problem arose recently while debugging. When trying to debug the code, it throws an exception on the open command on a connection to the local database. The message is "A SQL Server Compact DLL could not be loaded. Reinstall SQL Server Compact Edition. [ DLL Name = sqlceqp35.dll ]". Sometimes it's an unknown error instead, with a reference to the same file. If you execute the binary that's deployed during the debug session on the device itself, it runs without any problem. I've tried: reinstalling both .NET and SQL CE on the device, and changing the "specific version" setting in the reference properties in the project. The hardware I'm using is a Motorola MC70 with Windows Mobile 5.0, and the target platform of the project is Windows Mobile 5.0. Any ideas on what might cause this problem? EDIT: When I tried it on an MC75, I could debug it. The MC70 has OS version 05.01.0478 and the MC75 05.01.0478. My best guess now is that it's somehow related to the OS version or the actual device.

    Read the article

  • How to draw some lines in a view element defined in the XML layout

    - by Nils
    Hello, I have problems drawing some simple lines in a view object (Android programming). First I created the layout with the view element (a kind of painting area) in it (XML file):

        [...]
        <View android:id="@+id/viewmap"
            android:layout_width="572px"
            android:layout_height="359px"
            android:layout_x="26px"
            android:layout_y="27px"
        [...]

    ... and then tried to access it to draw some lines. Unfortunately the program runs and other UI elements like buttons are displayed, but I can't see the drawings. What's wrong?

        [...]
        viewmap = (View) findViewById(R.id.viewmap);
        Canvas canvas = new Canvas();
        viewmap.draw(canvas);
        Paint p = new Paint();
        p.setColor(Color.BLUE);
        p.setStyle(Paint.Style.STROKE);
        canvas.drawColor(Color.WHITE);
        p.setColor(Color.BLUE);
        canvas.drawLine(4, 4, 29, 5, p);
        p.setColor(Color.RED);
        viewmap.draw(canvas);
        [...]

    Thanks for help :)!
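
    The likely issue: drawing into a freshly constructed Canvas() goes nowhere, because that canvas isn't backed by the screen. A View receives the canvas it should paint on in its onDraw() callback, so the usual approach is to subclass View, draw there, and reference the subclass from the XML layout, calling invalidate() whenever a redraw is needed. A minimal sketch (class and package names assumed):

        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import android.util.AttributeSet;
        import android.view.View;

        public class MapView extends View {
            private final Paint paint = new Paint();

            public MapView(Context context, AttributeSet attrs) {
                super(context, attrs);   // this constructor is used for XML inflation
                paint.setColor(Color.BLUE);
                paint.setStyle(Paint.Style.STROKE);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                canvas.drawColor(Color.WHITE);
                canvas.drawLine(4, 4, 29, 5, paint);
            }
        }

    In the layout, the <View .../> element would then become <com.example.MapView .../>.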

    Read the article

  • Are there any configurable parameters to the gpsd?

    - by danatel
    I use the gpsd daemon with my application. Sometimes gpsd ceases to work for no apparent reason (clean sky); even the gpsmon monitor shows no fix. Are there any parameters which must be set, or is it a hardware problem? I am surprised that many satellites are visible but the "Stat" bitmap does not contain bit 7 (ephemeris data available). Should I somehow pre-configure my position to allow for correct ephemeris data? Here are the key readouts from my gpsmon screen (SiRF binary mode on 127.0.0.1:2947:/dev/ttyS3; the box-drawing characters were garbled, so only the data is reproduced):

        Pos (ECEF):  X 3949260  Y 1166016  Z 4856299 m
        Pos:         49.89411°  16.44920°  alt 1379 m
        Vel:         0.0 0.0 0.0 m/s   climb 0.0 m/s
        Week+TOW:    1578+224837.06   Day: 2 14:27:17.06
        HDOP:        0.0   Fix: 0

        Ch PRN  Az  El  Stat  C/N
         0   2 243  19  003f  40.4
         1  10 249  68  003f  43.0
         2  13  90  30  003f  40.9
         3   7  66  67  003f  39.8
         4   5 295  49  003d  39.7
         5   8 210  69  003f  41.0
         6  23  96   5  002d  28.0
         7   6  43   3  002d  23.1
         8  28 163  16  003f  39.8
        10   3  55   4  002d  24.7
        (channels 9 and 11 empty)

        SVs: 11 = 8 10 7 5 13 2 28 23 3 6 4
        Drift: 96506   Bias: 135976716   Estimated GPS Time: 224837059
        DGPS source: 1 (SBAS)   Corrections: 12

    Read the article

  • Hiring my first employee

    - by Ady
    A few years ago I moved to a new job, having been programming for two years in C#; this new company, however, was mainly using VB6. I made the case for .NET and won, but one of the concessions I had to make was to use VB.NET and not C# (understandable, as most of the other developers were already using VB). Three years later it was time to move on, but when applying for jobs I couldn't get past the recruitment agents. I realised that when they looked at the basic requirements (5 years' experience), they could not add 2 and 3 together to make 5: they were looking for 5 years in VB or in C#, not across both. Frustrated, I decided to combine my skills with a designer friend and start my own company. After two years of hard graft we are now looking for our first employee (a programmer), and this question has hit me again, but now I see the employer's perspective: why take the risk of someone getting up to speed when you have thousands of applicants to choose from? So my question is this: if I define the requirements too narrowly, I could miss the really great candidates, but if they are too broad, it's going to take ages to go through them all. This will be our first employee, so the choice needs to be good; I can't afford to make a mistake and employ someone naff. Another option would be to choose a bright university graduate and train them up (less of a risk because we can pay them less). What have others done in this situation, and what would you recommend I do?

    Read the article

  • super light software development process

    - by Walty
    Hi, the development processes I have been involved in so far have mostly had teams of a SINGLE member, or occasionally two. We used Python + Django for the major development. The development process is actually very fast, and we do have code reviews, design-pattern discussions, and constant refactoring. Though the team size is small, I do think there are some development processes / best practices that could be enforced; for example, using svn would definitely be better than regular copy backups. I did read some articles and books about Agile, XP, and continuous integration. I think they are nice, but still too heavy for this case (a team of 1 or 2, and fast coding). For example, IMHO, with nice design patterns and iterative development + refactoring, TDD MIGHT be overkill, or at least the overhead does not outweigh the advantages; the same goes for pair programming. Automated testing is a nice idea, but it seems not technically feasible for every project. Our current practices are: svn + milestones + code review. I wonder if there are development processes / best practices specifically targeted at such super-light teams? Thanks.

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of hundreds of thousands of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts, and the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture-map sharing: for example, all the chairs in the scene will share a common map. There is also some multitexturing, up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences depending on whether there are 10 or 1000 objects, assuming that a new material is set up each time an object is displayed. So it seems that if performance is important, the scene should be sorted by material so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where I get the biggest bang for the buck. For example:

        - What are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
        - Should I try to write ubershaders to minimize shader switching?
        - Should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm not looking for heroic measures, just some guidelines from people who already have some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL / OpenGL ES 2.0 way of doing things.
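
    As a concrete version of the sort-by-material idea, here is a sketch of a state-sorted draw loop (item fields and IDs are illustrative, not from any library): sort once when the scene loads, then only touch GL state when it actually changes between draws.

        // sort draw items by program first, then by texture
        items.sort(function (a, b) {
            return (a.programId - b.programId) || (a.textureId - b.textureId);
        });

        var curProgram = null, curTexture = null;
        for (var i = 0; i < items.length; i++) {
            var it = items[i];
            if (it.program !== curProgram) {       // usually the costliest switch
                gl.useProgram(it.program);
                curProgram = it.program;
            }
            if (it.texture !== curTexture) {
                gl.bindTexture(gl.TEXTURE_2D, it.texture);
                curTexture = it.texture;
            }
            gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, it.indexBuffer);
            gl.drawElements(gl.TRIANGLES, it.indexCount, gl.UNSIGNED_SHORT, 0);
        }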

    Read the article

  • ASP.NET retrieve Average CPU Usage

    - by Sam
    Last night I did a load test on a site and found that one of my shared caches is a bottleneck. I'm using a ReaderWriterLockSlim to control updates of the data. Unfortunately, at one point there are ~200 requests trying to update the data at approximately the same time, and this also coincided with CPU usage spikes. The data being updated is in the ASP.NET cache. What I'd like to do is skip the cache and hit the database on another machine whenever the CPU usage is around 75%. My problem is that I don't know how expensive it is to create a new performance counter to check the CPU usage. Also, I would probably like the average CPU usage over the last two or three seconds; however, I can't sit there and calculate the CPU time, as that would take longer than updating the cache currently takes. Is there an easy way to get the average CPU usage? Are there any drawbacks to this? I'm also considering totalling the wait count for the lock and switching over to the database at a certain threshold. My concern with this approach is that changing hardware might allow more locks with less strain on the system; also, finding the right balance for the threshold would be cumbersome, and it doesn't take into account any other load on the machine. But it's a simple approach, and simple is 99% of the time better.
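
    A sketch of one way to get a cheap rolling average (an assumed approach, not a standard component): sample the counter on a background timer so that the request path only reads a cached value. The first NextValue() on a PerformanceCounter returns 0, which a timer-based sampler absorbs naturally.

        using System.Diagnostics;
        using System.Threading;

        public static class CpuMonitor
        {
            private static readonly PerformanceCounter Counter =
                new PerformanceCounter("Processor", "% Processor Time", "_Total");
            private static readonly float[] Samples = new float[3];  // ~3 seconds
            private static int _next;
            private static volatile float _average;

            private static readonly Timer SampleTimer =
                new Timer(Sample, null, 0, 1000);  // once per second

            private static void Sample(object state)
            {
                Samples[_next++ % Samples.Length] = Counter.NextValue();
                float sum = 0f;
                foreach (float s in Samples) sum += s;
                _average = sum / Samples.Length;
            }

            // the hot path reads a field; it never touches the counter
            public static bool IsBusy { get { return _average > 75f; } }
        }

        // request path: if (CpuMonitor.IsBusy) { /* bypass cache, query the DB */ }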

    Read the article
