Search Results

Search found 8268 results on 331 pages for 'difference'.

Page 221/331

  • What benefits can Java developer have moving to a *NIX platform?

    - by dave-keiture
    Hi everyone, A friend of mine is a Java developer who's been using *NIX for ages. He claims that *NIX is for real Java geeks, whereas WIN is for dummies (and I'm one of them, according to him) and girls. When I ask him to argue his position and explain what's so good for a Java developer on *NIX, he starts talking about the console, wget, curl and grep. But sorry, wget and curl analogues exist for the WIN platform as well. As for the console, I'm using FAR Commander and have access to the command line when I need it. Moreover, even if I decide to move to *NIX, I will certainly use NetBeans or Eclipse there, so there will be no big difference. Guys who use Java on *NIX, could you please give me real killer examples where *NIX (any util or technique) dramatically increases Java development productivity (in the way the hints are given in "The Pragmatic Programmer"), or, which is also important, makes the process more fun? Thanks in advance!

    Read the article

  • Assigning large UInt32 constants in VB.Net

    - by Kumba
    I inquired about VB's erratic behavior of treating all numerics as signed types back in this question, and from the accepted answer there I was able to get by. Per that answer (Visual Basic Literals): "Also keep in mind you can add literals to your code in VB.net and explicitly state constants as unsigned." So I tried this:

        Friend Const POW_1_32 As UInt32 = 4294967296UI

    and VB.NET throws an Overflow error in the IDE. Pulling out the integer overflow checks doesn't seem to help -- this appears to be a flaw in the IDE itself. This, however, doesn't generate an error:

        Friend Const POW_1_32 As UInt64 = 4294967296UL

    So this suggests to me that the IDE isn't properly parsing the code and understanding the difference between Int32 and UInt32. Any suggested workarounds, and/or any clues about when MS will make unsigned data types intrinsic to the framework instead of the hacks they currently are?

    Read the article

  • Need to read .symtab

    - by user361190
    I am frustrated, and I have a simple question. I compile a simple program with gcc, and if I look at the section headers using objdump, it does not show the ".symtab" section. For the same a.out file, readelf does show the section; see the snippet below:

        [25] .symtab SYMTAB 00000000 000ca4 000480 10 26 2c 4
        [26] .strtab STRTAB 00000000 001124 00025c 00  0  0 1

    Why? In the default linker script I don't find a definition for the .symtab section. If I add a definition myself in the linker script, like:

        ....
        PROVIDE(__start_sym)
        .symtab : { *(.symtab) }
        PROVIDE(__end_sym)
        ....

    the difference between the addresses of __start_sym and __end_sym is zero, which means no such section is added to the output file. But readelf is able to read the section and dump its contents. How? Why?

    Read the article

  • Mozilla Firefox border rendering

    - by zA
    Hi, I've come across a strange thing in Firefox which has become a problem for me. It seems that Firefox renders borders thinner than other browsers do. For example, I have just a simple empty div element, and nothing else on the page, with its border set to width: 3px. In all other browsers, such as IE, Opera, Chrome and Safari, the width looks the same and is in fact 3px wide. But in Firefox I noticed that the border seemed thinner, so I checked the border width with Firebug, under the Computed tab's Box model. As I suspected, the rendered border in Firefox is thinner: Firefox renders it at 2.2px rather than the expected 3px. This small difference completely messes up my design. Has anyone else noticed this? Does anyone have a solution? Thanks in advance!

    Read the article

  • Count of memory copies in *nix systems between packet at NIC and user application?

    - by Michael_73
    Hi there, this is just a general question about some high-performance computing I've been wondering about. A certain low-latency messaging vendor's documentation talks about using raw sockets to transfer data directly from the network device to the user application, and claims that this reduces messaging latency even further than its other (admittedly carefully thought-out) design decisions already do. My question is therefore for those who grok the networking stacks on Unix or Unix-like systems: how much difference is this method likely to make? Feel free to answer in terms of memory copies, numbers of whales rescued or areas the size of Wales ;) Their messaging is UDP-based, as I understand it, so there's no problem with establishing TCP connections etc. Any other points of interest on this topic would be gratefully thought about! Best wishes, Mike

    Read the article

  • CLI design and implementation?

    - by Majid
    I am developing a time management tool for my personal use. I prefer the keyboard over the mouse, and the interface has a general-purpose text box which will act like a command line. I have just started thinking about what commands I need, what to use for the command names, how to pass in switches and parameters, and so forth. I wonder if some of you have come across a good read along these lines: something that describes the choices you have when designing a CLI, and how those affect the complexity of the interpreter and the extensibility of the commands. It makes no difference whether the descriptions are language-specific or in general terms; however, my implementation will be in JavaScript. Thank you.
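
    A minimal sketch of one way such a command box could be wired up, written in TypeScript since the implementation will be in JavaScript. The command names (add, list) and the --key=value switch syntax are purely hypothetical illustrations, not a recommendation:

        // Minimal command-line parser sketch: "verb --switch=value free text..."
        type Handler = (args: string[], switches: Record<string, string>) => void;

        const commands: Record<string, Handler> = {
          // hypothetical commands for a time-management tool
          add:  (args, sw) => console.log("add task:", args.join(" "), "due:", sw["due"] ?? "none"),
          list: (_args, sw) => console.log("list tasks, filter:", sw["filter"] ?? "all"),
        };

        function run(line: string): void {
          const tokens = line.trim().split(/\s+/);
          const [verb, ...rest] = tokens;
          const switches: Record<string, string> = {};
          const args: string[] = [];
          for (const t of rest) {
            const m = /^--([^=]+)=(.*)$/.exec(t);
            if (m) switches[m[1]] = m[2];
            else args.push(t);
          }
          const handler = commands[verb];
          if (handler) handler(args, switches);
          else console.log("unknown command:", verb);
        }

        run("add buy milk --due=tomorrow");
        run("list --filter=today");

    The main design choices are how switches are delimited and whether the interpreter dispatches on a flat verb table (as above) or on nested sub-commands; a flat table keeps the interpreter trivial and lets a new command be added by registering one entry.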

    Read the article

  • Android: various questions about GPS

    - by wei
    I'm writing my first location-based Android app, but I'm confused about some of the GPS service API. Here are my questions:
    1) To get my current location, I called requestLocationUpdates() with a listener in the onCreate() method of one activity. But what happens when another activity starts and the current activity goes invisible? Does the GPS location update stop? If so, how do I keep it running after the activity is switched?
    2) How accurate is Location.getSpeed()? How is it computed? Can it tell the difference between riding a bicycle and walking?
    3) This isn't really an Android question: how do I calculate the coordinates of a location, say, 100 m away from my current location? (See the sketch below.)
    4) To stop the GPS, do I only need to remove all the listeners that have been registered with the LocationManager?
    Thanks a lot!
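
    For question 3, a hedged back-of-the-envelope sketch in Java: over a distance as short as 100 m you can treat the Earth as locally flat and shift latitude/longitude using the mean Earth radius. The sample coordinates and bearing below are just illustrative.

        // Offset a (lat, lon) point by a given distance and bearing, flat-Earth approximation.
        public class GeoOffset {
            static final double EARTH_RADIUS_M = 6_371_000.0; // mean radius

            // distanceM: metres to move; bearingDeg: 0 = north, 90 = east
            static double[] offset(double latDeg, double lonDeg, double distanceM, double bearingDeg) {
                double bearing = Math.toRadians(bearingDeg);
                double dNorth = distanceM * Math.cos(bearing);
                double dEast  = distanceM * Math.sin(bearing);
                double dLat = Math.toDegrees(dNorth / EARTH_RADIUS_M);
                double dLon = Math.toDegrees(dEast / (EARTH_RADIUS_M * Math.cos(Math.toRadians(latDeg))));
                return new double[] { latDeg + dLat, lonDeg + dLon };
            }

            public static void main(String[] args) {
                double[] p = offset(52.5200, 13.4050, 100, 45); // 100 m north-east of a sample point
                System.out.printf("%.6f, %.6f%n", p[0], p[1]);
            }
        }

    For question 4, yes: calling removeUpdates() on the LocationManager for each listener you registered stops your app's GPS requests.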

    Read the article

  • Move Global.asax to IHttpModule when using ASP.NET MVC

    - by rockinthesixstring
    I have successfully created an IHttpModule to replace my Global.asax file in many Web Forms applications. But looking at the Global.asax file in my MVC application, the methods are totally different. I'm wondering if it is still possible to do the same thing in an MVC app. I know it's not necessary and the Global.asax works just fine; I suppose I just want to have nothing but the web.config in the root directory of my application. Also, I am putting all of my classes in a separate class library project instead of a folder in my MVC application. Not sure if this makes a difference or not.

    Read the article

  • Different Parameter Value Results In Slow Query

    - by alphadogg
    I have a stored procedure in SQL Server 2008. It basically builds a string and then runs the query using EXEC():

        SELECT *
        FROM [dbo].[StaffRequestExtInfo] WITH(nolock, readuncommitted)
        WHERE [NoteDt] < @EndDt
          AND [NoteTypeCode] = @RequestTypeO
          AND ([FNoteDt] >= @StartDt AND [FNoteDt] <= @EndDt)
          AND [FStaffID] = @StaffID
          AND [FNoteTypeCode] <> @RequestTypeC
        ORDER BY [LocName] ASC, [NoteID] ASC, [CNoteDt] ASC

    All but @RequestTypeO and @RequestTypeC are passed in as sproc parameters; those two are built from a parameter into local variables. Normally, the query runs in under one second. However, for one particular value of @StaffID, the execution plan is different and about 30x slower. In either case, the amount of data returned is generally the same, but execution time goes way up. I tried recompiling the sproc. I also tried copying @StaffID into a local @LocalStaffID. Neither approach made any difference. Any ideas?
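
    This looks like classic parameter sniffing: the cached plan built for one sniffed @StaffID value is reused for a value with a very different row count. A hedged sketch of two common workarounds, both available in SQL Server 2008; whether either helps depends on the actual data distribution, and the predicates are abbreviated here:

        -- Option 1: recompile the statement on every execution, so each @StaffID
        -- gets a plan built for its own cardinality estimate.
        SELECT * FROM [dbo].[StaffRequestExtInfo] WITH(nolock, readuncommitted)
        WHERE [FStaffID] = @StaffID AND [NoteDt] < @EndDt   -- other predicates as above
        OPTION (RECOMPILE);

        -- Option 2: optimize for the "average" value instead of the sniffed one.
        SELECT * FROM [dbo].[StaffRequestExtInfo] WITH(nolock, readuncommitted)
        WHERE [FStaffID] = @StaffID AND [NoteDt] < @EndDt   -- other predicates as above
        OPTION (OPTIMIZE FOR UNKNOWN);

    Since the procedure builds the statement as a string and runs it through EXEC(), the hint has to be appended inside the dynamic SQL text; running it through sp_executesql with explicit parameters plus one of these hints is a common variation.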

    Read the article

  • getcwd for current location based on ftp account permission

    - by John Doe
    Hello. I'm trying to make a small script that changes the permissions on a specific file using an FTP connection. My problem is the absolute path. I have an FTP account which lands in the script directory (/script/). If I use getcwd, it returns the whole path (/home/user/public_html/script), but I need only the difference between the full path and the current path (getcwd path: /home/user/public_html/script/; FTP landing path: /script). So, how can I use getcwd to get the current directory for an FTP account? For example, if the user lands in public_html, the path to the script will be /script/, and if he lands inside /user, the path will be /public_html/script. Thanks

    Read the article

  • Is there a function that can read a PHP file post-parsing?

    - by Rob
    I've got a PHP file echoing hashes from a MySQL database. This is necessary for a remote program I'm using, but at the same time I need my other PHP script to open it and check it for specified strings after parsing. If it checks for the strings pre-parsing, it'll just get the MySQL query rather than the strings to look for. I'm not sure if any functions do this. Does fopen() read the file prior to parsing? Or file_get_contents()? If so, is there a function that'll read the file after the PHP and MySQL code runs? The file with the hashes query and echo is in the same directory as the PHP file reading it, if that makes a difference. Perhaps fopen reads it post-parse and I've done something wrong, but at first I was storing the hashes directly in the file, and it was working fine. After I changed it to echo the contents of the MySQL table, it bugged out.

    Read the article

  • Problem doing SVN Vendor Branch - merge

    - by Gyan
    Hi, I am trying to use an SVN vendor branch to upgrade a third-party library (we have modified its source code). I followed all the steps to create the vendor branch: created the vendor branch for the old version of the library, created the vendor branch for the latest version, and copied the latest version to the current folder using the svn_load_dirs.pl script. The structure of the vendor repository in SVN is:

        URL/vendor/library/3.5.0
        URL/vendor/library/3.7.0
        URL/vendor/library/current

    I have library 3.5.0 used/modified at URL/trunk/library/customized-library. I have a problem when I try to merge the difference between URL/vendor/library/3.7.0 and URL/vendor/library/3.5.0 into URL/trunk/library/customized-library. With URL/trunk/library/customized-library checked out as my working folder, I use the following command to do the merge:

        svn merge URL/vendor/library/3.5.0 URL/vendor/library/current . --accept PARAMETERS

    When I use theirs-conflict for the --accept parameter, it ignores all of my changes to the old version and copies the files from 3.7.0. When I use mine-conflict, it ignores the files in 3.7.0. When I use postpone, it reports a tree conflict. Thanks, Gyan

    Read the article

  • Disable IPV6 on specific NIC via PowerShell using a Com Object on Windows Server 2008 R2?

    - by user1256194
    I need to script some Windows Server 2008 R2 builds, preferably in PowerShell. I need to disable (uncheck) IPv6 on a specific NIC, the same NIC every time; currently I have to set it manually. I do not want to disable IPv6 completely for the entire server, since other things may use it in the future. Is there an object I can reference in a PowerShell command that specifies my NIC, "Intel(R) PRO/1000 MT Network Connection", and disables IPv6 on it? Unfortunately, Group Policy is not an option, says the boss. I've tried finding an appropriate WMI object via "PowerShell Scriptomatic" but couldn't see the difference between an enabled and a disabled setting on the Intel NIC. Thanks in advance.

    Read the article

  • SQL Server mirroring connection doesn't work

    - by StNickolas
    I have two servers, srv-erp1 and srv-erp3, which I set up to mirror each other. All setup was done following lots of tutorials and examples. But when I call

        ALTER DATABASE MIRROR_TEST SET PARTNER = 'TCP://srv-erp3:5022'

    the response is: "The server network address "TCP://srv-erp3:5022" can not be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational." I went to cmd on srv-erp3 and ran netstat -an: the port is listening. I went to cmd on srv-erp1 and ran telnet srv-erp3 5022: it connects fine. All firewalls are turned off. The only difference in the servers' configuration is that srv-erp1 is on Windows Server 2003 R2 x64 and srv-erp3 is on Windows Server 2008 R2 x64. What can be the reason for this problem? Regards, Dmitry.

    Read the article

  • Need help with a DB query on SQL Server 2005

    - by Avinash
    We're seeing strange behavior when running two versions of a query on SQL Server 2005.

    Version A:

        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = 1234
        ORDER BY name ASC

    Version B:

        DECLARE @Id AS INT;
        SET @Id = 1234;
        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = @Id
        ORDER BY name ASC

    Both queries return 1000 rows; version A takes on average 15s, version B on average 4s. Could anyone help us understand the difference in execution times of these two versions of SQL? If we invoke this query via named parameters using NHibernate, we see the following query via SQL Server Profiler:

        EXEC sp_executesql N'SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @id ORDER BY name ASC', N'@id INT', @id=1234;

    ...and this tends to perform as badly as version A. Thanks in advance.
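
    A likely explanation, hedged: with a literal (version A) or a sniffed sp_executesql parameter, the optimizer builds a plan for the specific value 1234; with a local variable (version B) it cannot sniff the value and falls back to average-density estimates, which in this case happen to produce the better plan. One way to make the parameterized form behave like version B on SQL Server 2005 is to copy the parameter into a local variable inside the batch, exactly as version B does (OPTIMIZE FOR UNKNOWN would be the cleaner hint, but it requires SQL Server 2008 or later):

        EXEC sp_executesql
            N'DECLARE @localId INT; SET @localId = @id;
              SELECT otherattributes.*
              FROM listcontacts
              JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
              WHERE listcontacts.listid = @localId
              ORDER BY name ASC;',
            N'@id INT', @id = 1234;

    Comparing the actual execution plans of versions A and B should confirm whether the 1234-specific plan really is the slower one before committing to this workaround.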

    Read the article

  • Kernel thread exit in linux

    - by Raffo
    Hi guys, I'm here to ask about the difference between a process and a thread in Linux. I know that a thread in Linux is just a "task" which shares with the parent process the things they need to have in common (the address space and other important information). I also know that the two are created by calling the same function (clone()), but there's still something I'm missing: what really happens when a thread exits? Which function is called inside the Linux kernel? I know that when a process exits it calls the do_exit function, but there, or somewhere else, there should be a way to tell whether it is just a thread exiting or a whole process. Can you explain this, or point me to a textbook? I tried "Understanding the Linux Kernel" but I was not satisfied with it. I'm asking because I need to add fields to the task_struct structure, and I need to decide how to manage that information for a process and its children. Thank you.
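
    A hedged sketch, assuming a 2.6-era kernel: both a lone thread and a process's last thread go through do_exit(), and the usual way to tell them apart from kernel code is to compare pid and tgid on the task_struct; they are equal only for the thread-group leader.

        /* Sketch only: distinguishing "one thread is exiting" from "the whole
         * process is exiting" inside kernel code, e.g. a hook called from do_exit(). */
        #include <linux/kernel.h>
        #include <linux/sched.h>

        static void my_exit_hook(struct task_struct *tsk)
        {
                if (tsk->pid == tsk->tgid)              /* thread-group leader, i.e. "the process" */
                        printk(KERN_INFO "process %d exiting\n", tsk->tgid);
                else                                    /* one thread of a larger group */
                        printk(KERN_INFO "thread %d of process %d exiting\n",
                               tsk->pid, tsk->tgid);
        }

    Userspace sees the same distinction through the clone() flags: pthread_create() passes CLONE_VM | CLONE_THREAD (among others) so the new task joins the existing thread group, while fork() passes neither, which is why a forked child gets its own tgid.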

    Read the article

  • What are some "mental steps" a developer must take to begin moving from SQL to NO-SQL (CouchDB, Fath

    - by Byron Sommardahl
    I have my mind firmly wrapped around relational databases and how to code efficiently against them. Most of my experience is with MySQL and SQL. I like many of the things I'm hearing about document-based databases, especially when someone in a recent podcast mentioned huge performance benefits. So, if I'm going to go down that road, what are some of the mental steps I must take to shift from SQL to NoSQL? If it makes any difference in your answer, I'm primarily a C# developer (today, anyhow). I'm used to ORMs like EF and LINQ to SQL; before ORMs, I rolled my own objects with generics and data readers. Maybe that matters, maybe it doesn't. Here are some more specific questions: How do I need to think about joins? How will I query without a SELECT statement? What happens to my existing stored objects when I add a property in my code? (Feel free to add questions of your own here.)

    Read the article

  • Real thing about "->" and "."

    - by fsdfa
    I've always wanted to know how the compiler really sees a pointer to a struct (in C, say) versus the struct itself. Given

        struct person p;
        struct person *pp;

    for pp->age I always imagined that the compiler does "value of pp + offset of the attribute 'age' in the struct". But what does it do with p.age? It would be almost the same. For me, "the programmer", p is not a memory address, it's "the structure itself", but of course that is not how the compiler deals with it. My guess is it's more of a syntactic thing, and the compiler always does (&p)->age. Am I correct?
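
    A small C sketch of the equivalence being described: both access paths boil down to the same address arithmetic (base address of the struct plus offsetof(struct person, age)); the only practical difference is that for pp the base address is first loaded from the pointer. The struct layout here is just an illustration.

        #include <stdio.h>
        #include <stddef.h>

        struct person {
            char name[16];
            int  age;
        };

        int main(void) {
            struct person p = { "Alice", 30 };
            struct person *pp = &p;

            /* All three read the same int at (char *)&p + offsetof(struct person, age). */
            printf("%d %d %d\n", p.age, pp->age, (&p)->age);
            printf("offset of age: %zu\n", offsetof(struct person, age));
            return 0;
        }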

    Read the article

  • What is the purpose of OCaml's Lazy.lazy_from_val?

    - by Ricardo
    The documentation of Lazy.lazy_from_val states that this function is for special cases:

        val lazy_from_val : 'a -> 'a t
        lazy_from_val v returns an already-forced suspension of v.
        This is for special purposes only and should not be confused with lazy (v).

    Which cases are they talking about? If I create a pair of suspended computations from a value, like:

        let l1 = lazy 123
        let l2 = Lazy.lazy_from_val 123

    what is the difference between these two? Because Lazy.lazy_is_val l1 and Lazy.lazy_is_val l2 both return true, saying that the value is already forced!
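
    A hedged illustration of where the two differ (the observable behaviour, not the official rationale): lazy e suspends the evaluation of an arbitrary expression e, whereas lazy_from_val wraps a value you have already computed. With a constant like 123 the compiler can see there is nothing left to suspend, which is why both of the asker's examples report as already forced; with an expression that still has work or side effects to do, the difference shows. Modern OCaml spells the function Lazy.from_val:

        (* lazy defers its body; Lazy.from_val wraps an already-computed value. *)
        let suspended = lazy (print_endline "computing suspended"; 123)
        (* nothing printed yet *)

        let eager = Lazy.from_val (print_endline "computing eager"; 123)
        (* "computing eager" is printed immediately, at construction *)

        let () =
          Printf.printf "forcing: %d\n" (Lazy.force suspended);  (* prints "computing suspended" first *)
          Printf.printf "forcing: %d\n" (Lazy.force eager)        (* already forced, no side effect *)

    A typical special case is filling a slot of type 'a Lazy.t (say, a cell in a lazy stream) with a value you already have in hand: Lazy.from_val lets you do that without allocating a closure or paying for a later force.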

    Read the article

  • How to automate testing of a browser-based app?

    - by mawg
    If it were a Windows program, I would use AutoIt to automate testing. Is there something similar for browser-based apps? Nothing too complex: it should just allow scripting (preferable, for me, to macro recording) to simulate human interaction with the browser, which means being able to identify the fields of a form by name, inject text into some, simulate mouse clicks on others, etc., and then, after submitting a form, it should be able to read the text of certain named controls and check the status of others (checked, radio group index, read-only, etc.). While I do appreciate a full-featured product, I don't appreciate a steep learning curve, so something as simple as AutoIt's scripting would be fine. I don't know if it makes a difference which browser is used, but I could live with MSIE 6 or higher (maybe 7 or higher at a push).
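
    One widely used option is Selenium WebDriver, which drives a real browser (including IE) from a short script. A hedged Python sketch against a hypothetical login form; the URL and element names are made up for illustration:

        # Hedged sketch: fill a form, submit it, and read back a result with Selenium WebDriver.
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Firefox()                      # or webdriver.Ie(), webdriver.Chrome(), ...
        driver.get("http://example.com/login")            # hypothetical page

        driver.find_element(By.NAME, "username").send_keys("testuser")    # hypothetical field names
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.NAME, "submit").click()

        status = driver.find_element(By.ID, "status").text                 # text after the round trip
        remember = driver.find_element(By.NAME, "remember").is_selected()  # checkbox state
        print(status, remember)

        driver.quit()

    Other tools in the same space include Watir and the record-and-replay Selenium IDE, which is closer in spirit to AutoIt-style macro recording.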

    Read the article

  • How efficient is an if statement compared to a test that doesn't use an if? (C++)

    - by Keand64
    I need a program to get the smaller of two numbers, and I'm wondering whether using a standard "if x is less than y":

        int a, b, low;
        if (a < b) low = a;
        else low = b;

    is more or less efficient than this:

        int a, b, low;
        low = b + ((a - b) & ((a - b) >> 31));

    (or the variation of putting int delta = a - b at the top and replacing instances of a - b with that). I'm just wondering which one of these would be more efficient (or if the difference is too minuscule to be relevant), and about the efficiency of if-else statements versus alternatives in general.
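
    A hedged side-by-side sketch in C++ spelling out what the branchless form assumes: it only works when int is 32 bits, when right-shifting a negative value is an arithmetic shift (true on mainstream compilers, but implementation-defined), and when a - b cannot overflow. With optimization enabled, most compilers will compile the plain if (or std::min) to a branchless conditional move anyway, so measuring is the only real answer.

        #include <algorithm>
        #include <cstdint>
        #include <iostream>

        // Branchless min: relies on arithmetic right shift and on a - b not overflowing.
        int32_t branchless_min(int32_t a, int32_t b) {
            int32_t delta = a - b;               // may overflow for extreme inputs (undefined behaviour)
            return b + (delta & (delta >> 31));  // delta >> 31 is all-ones when a < b, else zero
        }

        int main() {
            int32_t a = 7, b = 42;
            std::cout << std::min(a, b) << ' ' << branchless_min(a, b) << '\n';  // prints: 7 7
        }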

    Read the article

  • How do I set up a test duplicate of a Django and Postgresql based web application?

    - by cojadate
    Not sure if this is an excessively broad and newbie-ish question for Stack Overflow but here goes: I paid someone else to build a web application for me and now I want to tweak certain aspects of it myself. I learn best by trial and error – changing stuff and seeing what happens. Obviously that's not a great way to treat a live site, so I need to duplicate the site on some kind of test server which I can play with without fear of the consequences. Unfortunately the closest I've come to programming has been creating ActionScript-based websites. I've never touched a database. So I really don't know where to start with setting up a test server. I would really appreciate any advice about where to start. I am completely ignorant and lost here. The web application is built in python/django using a Postgresql database. I use Mac OS X 10.6 if that makes any difference.
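
    A common way to get a safe sandbox is to run a second copy of the code on your own machine (or a cheap VM), restore a dump of the production database into a fresh Postgres database, and point a local Django settings override at it. A hedged sketch, assuming a Django 1.x-era project using psycopg2; all names below are placeholders:

        # local_settings.py -- hypothetical override imported at the end of settings.py
        DEBUG = True

        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.postgresql_psycopg2",  # "django.db.backends.postgresql" on newer Django
                "NAME": "mysite_sandbox",   # restored locally from a pg_dump of the live database
                "USER": "sandbox_user",
                "PASSWORD": "change-me",
                "HOST": "localhost",
                "PORT": "5432",
            }
        }

    With that in place, python manage.py runserver gives you a throwaway copy of the site on http://127.0.0.1:8000/ that you can break freely without touching production.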

    Read the article

  • What Use Are Threads Outside of Parallel Problems on Multicore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system. I can understand threading something like an MPEG-2 decoder that's going to run on a multicore CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single-core system, or even a multicore system if your task doesn't gain significant performance from a parallel implementation? Or more succinctly, what kinds of non-performance-related problems justify threading? Edit: Well, I just ran across one instance that's not CPU-limited but where threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot. Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.
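
    A hedged Java sketch of the edit's point: on a high-latency link, two transfers driven from two threads overlap their round-trip waits, so wall-clock time is roughly that of the slower transfer rather than the sum, with no CPU parallelism required. The URLs are placeholders.

        // Overlapping two high-latency downloads with a small thread pool, even on one core.
        import java.io.InputStream;
        import java.net.URL;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ParallelFetch {
            static long fetch(String url) throws Exception {
                long bytes = 0;
                try (InputStream in = new URL(url).openStream()) {
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) != -1; ) bytes += n;
                }
                return bytes;
            }

            public static void main(String[] args) {
                ExecutorService pool = Executors.newFixedThreadPool(2);
                List<String> urls = List.of("http://example.com/a", "http://example.com/b");
                for (String u : urls) {
                    pool.submit(() -> {
                        try { System.out.println(u + ": " + fetch(u) + " bytes"); }
                        catch (Exception e) { e.printStackTrace(); }
                    });
                }
                pool.shutdown();
            }
        }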

    Read the article

  • Cocoa Screensaver Framework error message

    - by Veljko Skarich
    Hi, I'm trying to make a screen saver using the Cocoa ScreenSaver framework. The project builds fine and generates the .saver file, but when I try to run it in the System Preferences test window, it displays the error message: "You cannot use the screen saver with this version of Mac OS X. Please contact the vendor to get a newer version of the screen saver." I have the Xcode settings at Release | x86_64, and I am running OS X 10.6.6 on a 2.4 GHz Intel Core i5 MacBook Pro. I've searched around online, and most of the solutions to this error message come down to making sure the build is 64-bit, which the x86_64 setting should indeed take care of. I am trying to play a QuickTime movie in the screensaver, if that makes any difference. I am at a loss; any help would be appreciated. Thank you.

    Read the article

  • Determining idle network transfer bandwidth

    - by rwmnau
    I'm building an application that will move around some potentially large files, but I want to do it without disturbing the user's network connection by flooding it. I know that Windows BITS has this kind of functionality, and that's essentially what I'm looking to replicate (as far as the throttling goes). I know BITS has other functionality as well that I'm not interested in, and I also have the option to consume it from .NET, but I'm interested in how it works. I've looked online, and I haven't found a clear explanation of how exactly BITS determines how much bandwidth to consume, aside from a vague "BITS polls activity to watch for a drop in the bandwidth used by other programs." What does this mean? Bandwidth consumed by other programs can drop for a number of other reasons as well - can BITS tell the difference? If I was looking for a process that replicated this "stay just under the radar, where the user won't notice the transfers" functionality, how would I go about doing it?

    Read the article
