Search Results

Search found 23827 results on 954 pages for 'software architecture'.

Page 183 of 954

  • Bash can't start a programme that's there and has all the right permissions

    - by Rory
    This is a Gentoo server. There's a programme prog that can't execute (yes, the execute permission is set). About the file:

        $ ls
        prog
        $ ./prog
        bash: ./prog: No such file or directory
        $ file prog
        prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped
        $ pwd
        /usr/local/bin
        $ /usr/local/bin/prog
        bash: /usr/local/bin/prog: No such file or directory
        $ less prog | head
        ELF Header:
          Magic:   7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
          Class:                             ELF32
          Data:                              2's complement, little endian
          Version:                           1 (current)
          OS/ABI:                            UNIX - System V
          ABI Version:                       0
          Type:                              EXEC (Executable file)
          Machine:                           Intel 80386
          Version:                           0x1

    I have a fancy less, to show that it's an actual executable. Here's some more data:

        $ xxd prog | head
        0000000: 7f45 4c46 0101 0100 0000 0000 0000 0000  .ELF............
        0000010: 0200 0300 0100 0000 c092 0408 3400 0000  ............4...
        0000020: 0401 0a00 0000 0000 3400 2000 0700 2800  ........4. ...(.
        0000030: 2600 2300 0600 0000 3400 0000 3480 0408  &.#.....4...4...
        0000040: 3480 0408 e000 0000 e000 0000 0500 0000  4...............
        0000050: 0400 0000 0300 0000 1401 0000 1481 0408  ................
        0000060: 1481 0408 1300 0000 1300 0000 0400 0000  ................
        0000070: 0100 0000 0100 0000 0000 0000 0080 0408  ................
        0000080: 0080 0408 21f1 0500 21f1 0500 0500 0000  ....!...!.......
        0000090: 0010 0000 0100 0000 40f1 0500 4081 0a08  ........@...@...

    and:

        $ ls -l prog
        -rwxrwxr-x 1 1000 devs 725706 Aug 6 2007 prog
        $ ldd prog
            not a dynamic executable
        $ strace ./prog
        1249403877.639076 execve("./prog", ["./prog"], [/* 27 vars */]) = -1 ENOENT (No such file or directory)
        1249403877.640645 dup(2) = 3
        1249403877.640875 fcntl(3, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE)
        1249403877.641143 fstat(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
        1249403877.641484 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b3b8954a000
        1249403877.641747 lseek(3, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
        1249403877.642045 write(3, "strace: exec: No such file or dir"..., 40strace: exec: No such file or directory
        ) = 40
        1249403877.642324 close(3) = 0
        1249403877.642531 munmap(0x2b3b8954a000, 4096) = 0
        1249403877.642735 exit_group(1) = ?

    About the server: FTR, the server is a Xen domU, and the programme is a closed-source Linux application. This VM is a copy of another VM that has the same root filesystem (including this programme), and that one works fine. I've tried all of the above as root, with the same problem. Did I mention the root filesystem is mounted over NFS? However, it's mounted 'defaults,nosuid', which should include execute, and I am able to run many other programmes from that mounted drive. /proc/cpuinfo:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 4
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 1
        cpu MHz         : 2992.692
        cache size      : 1024 KB
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 5
        wp              : yes
        flags           : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl cid cx16 xtpr
        bogomips        : 5989.55
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    Example of a file that I can run: I can run other programmes on that mounted filesystem on that server. For example:

        $ ls -l ls
        -rwxr-xr-x 1 root root 105576 Jul 25 17:14 ls
        $ file ls
        ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), stripped
        $ ./ls
        attr cat cut echo getfacl ln more ... (you get the idea) ... rmdir sort tty
        $ less ls | head
        ELF Header:
          Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
          Class:                             ELF64
          Data:                              2's complement, little endian
          Version:                           1 (current)
          OS/ABI:                            UNIX - System V
          ABI Version:                       0
          Type:                              EXEC (Executable file)
          Machine:                           Advanced Micro Devices X86-64
          Version:                           0x1
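
    Worth noting: ldd reporting "not a dynamic executable" for a binary that file calls dynamically linked, combined with execve returning ENOENT for a file that plainly exists, is the classic signature of a missing ELF interpreter, i.e. the 32-bit runtime loader the binary requests isn't present on this otherwise 64-bit system. A minimal check, as a sketch (the loader path below is an assumption; substitute whatever the first command actually prints):

        $ readelf -l prog | grep interpreter   # which runtime loader does prog request?
        $ ls -l /lib/ld-linux.so.2             # hypothetical path; check the one printed above

    If that loader (and the 32-bit libraries it needs) is missing, 64-bit programmes like ls would keep working while this one fails exactly as shown.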

    Read the article

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and logshipping solution that achieves the following:

    - 15-minute Recovery Point Objective (no more than 15 minutes of data loss at any time)
    - 5-minute Recovery Time Objective (the db must be back up and running within 5 minutes)

    I'm considering using logshipping only (which I think is pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration:

    - There is a 40 Mbit/sec link between the primary and disaster recovery (DRC) sites.
    - The sites are about 600 km apart.
    - At close of business (COB), the amount of data generated is predicted to be about 150 MB/sec.
    - Log backup is planned for every 5 min.

    Doing some rough calculation I came up with the following numbers: 40 Mbit/sec = 5 MB/sec at 100% network efficiency, and 5 MB/sec = 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-minute RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. If we cut it down to a 3-minute logshipping window, that equals ~900 MB over 3 minutes at 100% network efficiency, which leaves about 1 minute of backup time and 1 minute of restore time. I currently have no information on whether the system in use can restore 900 MB in 1 minute, but assume it can. For the COB scenario: at 150 MB/sec, the 3-minute window corresponds to about 27 GB of data, and I think this is where the SLA will break, since there is no way to transfer 27 GB over a 40 Mbit/sec line in 3 minutes. Can I get a second opinion? I am thinking database mirroring might be a better answer for this.
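
    A quick back-of-envelope check of that COB worst case (a sketch; the figures are only the assumptions stated above):

        bw_mb_per_s=5                  # 40 Mbit/s at 100% efficiency, in MB/s
        cob_gb=27                      # log volume produced over the 3-minute window at COB
        need_s=$(( cob_gb * 1024 / bw_mb_per_s ))
        echo "shipping ${cob_gb} GB takes ~${need_s}s (~$(( need_s / 60 )) min) against a 180s window"

    which makes the same point: the link, not logshipping itself, is what breaks the SLA at close of business.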

    Read the article

  • Why can't we capture the design of software more effectively?

    - by Ira Baxter
    As engineers, we all "design" artifacts (buildings, programs, circuits, molecules...). That's an activity (design-the-verb) that produces some kind of result (design-the-noun). I think we all agree that design-the-noun is a different entity than the artifact itself. A key activity in the software business (indeed, in any business where the resulting product artifact needs to be enhanced) is to understand the design-the-noun. Yet we seem, as a community, to be pretty much complete failures at recording it, as evidenced by the amount of effort people put into rediscovering facts about their code base. Ask somebody to show you the design of their code and see what you get. I think of a design for software as having:

    - An explicit specification for what the software is supposed to do and how well it does it
    - An explicit version of the code (this part is easy; everybody has it)
    - An explanation of how each part of the code serves to achieve the specification
    - A rationale for why the code is the way it is (e.g., why a particular choice rather than another)

    What is NOT a design is a particular perspective on the code. For example (not to pick on them specifically), UML diagrams are not designs. Rather, they are properties you can derive from the code or, arguably, properties you wish you could derive from the code. But as a general rule, you can't derive the code from UML. Why is it that after 50+ years of building software we don't have regular ways to express all this? My personal opinion is that we don't have good ways to express it, and that even if we did, most of the community is so focused on producing "code" that design-the-noun would get lost anyway. (IMHO, until design becomes the purpose of engineering, with the artifact extracted from the design, we're not going to get around this.) What have you seen as means for recording designs in the sense I have described? Explicit references to papers would be good. Why do you think specific and general means have not been successful? How can we change this?

    Read the article

  • Hyperthreading vs. SQL Server & PostgreSQL

    - by IanC
    I have read that hyperthreading is a "performance killer" when it comes to DBs, but what I read didn't state which CPUs. Further, it mostly indicated that I/O was "cut to < 10% performance". That doesn't make sense logically, since I/O is primarily a function of controllers and disks, not CPUs; but then, no one ever said bugs made sense. What I read also stated that SQL Server could put two parallel query ops onto one logical core (two threads), thereby degrading performance. I have a hard time believing SQL Server's architects would have made such an obvious miscalculation. Does anyone have any data on how hyperthreading on current-generation CPUs affects either of the RDBMSs I mentioned?
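
    For the SQL Server half, one starting point is what the engine itself reports about logical processors (a sketch; assumes sqlcmd is on the path and a default local instance):

        sqlcmd -S . -Q "SELECT cpu_count, hyperthread_ratio FROM sys.dm_os_sys_info"

    cpu_count is the number of logical CPUs and hyperthread_ratio the number of logical cores exposed per physical package, which together show whether parallel queries can even land on SMT siblings on a given box.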

    Read the article

  • What are some techniques to monitor multiple instances of a piece of software?

    - by Geo Ego
    It was recommended that I ask this question here by a member of StackOverflow. I have a piece of self-serve kiosk software that will be running at multiple sites, and I'd like to monitor their status remotely. The kiosk application itself is pretty much finished. I am now in the process of creating a piece of software that will monitor all of the kiosks from a central location, so that the customer can view particular details remotely (for instance, how many bills are in the acceptor's cash cartridge, which customer is currently logged in, etc.). Because I am at such an early stage of development, my options are quite open. I understand that I'm not giving very many qualifications, but I'd like to get a good variety of potential solutions. Some details:

    - The kiosk software is a VB6 app running on Windows Embedded.
    - The monitoring software will run on a modern desktop version of Windows (XP, Vista, or 7).
    - The database is SQL Server 2008.

    My initial idea was to develop a .NET app that would simply report the last database transaction for each kiosk at a set interval (say every second or so), but I'd really like the kiosk software to report its status in real time. I'm not exactly sure where to begin in terms of what modifications may need to be made to the kiosk software, and what the monitoring software will require. Links to articles on these topics would be most welcome.
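
    For scale, the interim polling idea amounts to something like this (a sketch; the server, database, table, and column names are all made up, and the real monitor would be the planned .NET app rather than a shell loop):

        # poll the last recorded transaction per kiosk once a second
        while true; do
            sqlcmd -S central-db -d KioskMonitor -h -1 -Q \
                "SELECT KioskId, MAX(TransactionTime) FROM KioskTransactions GROUP BY KioskId"
            sleep 1
        done

    Real-time reporting would invert this: the kiosks push status (e.g., over a persistent TCP connection or a message queue) instead of the monitor pulling it.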

    Read the article

  • Web application and remote storage of files

    - by Matt
    I have a web application that can store lots and lots of files on the server, i.e. users upload data to it. The files are stored below a particular storage path. The web host will be an IBM xSeries 345. However, the disks are really expensive, so we would like to put the files onto a less expensive server. Now here is the question: should I use an NFS mount on the IBM server of a path on the storage server, or should I write some scripts to upload the files to the storage server instead? Both the storage server and the web host are on the same network; only the web server is visible to the world. Is NFS performance suitable for an expected low to moderately loaded server?
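
    The two options, sketched with placeholder hostnames and paths:

        # Option 1: mount the storage server's export under the web app's storage path
        mount -t nfs storage01:/export/uploads /var/www/app/storage
        # Option 2: keep uploads on local disk and push them across on a schedule
        rsync -a /var/www/app/storage/ storage01:/export/uploads/

    Option 1 keeps a single live copy but puts every read and write on the network; option 2 keeps writes local at the cost of a replication delay.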

    Read the article

  • Graphical MySQL tools

    - by Shlomo Shmai
    Are there any good graphical tools (preferably free) for navigating a MySQL database? I find myself running a lot of the same SQL queries to look at data in the tables, and I would imagine there's a GUI for this that makes life easier. Does anyone know of such a thing? Thanks a lot.

    Read the article

  • Understanding the nop byte(s)

    - by Cole Johnson
    OK, so I was reading through the AMD64 manuals, and knowing that nop is really an xchg eax, eax, I looked at xchg and found something interesting: it seems a byte can be encoded into the instruction to specify the registers (apologies, I'm on my iPod): [picture]. So what I am wondering is: how does the processor know whether there is a byte after the opcode to work with, or is it that the extra register has to be rAX, so that the instruction is actually still the one byte 0x90?
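
    The distinction is in the opcode itself: 0x90 through 0x97 are the short form "xchg eAX, reg", with the second register folded into the low three bits of the opcode byte, so no operand byte follows; exchanging two registers where neither is eAX uses the separate 0x87 opcode, which does take a ModRM byte. A small experiment, as a sketch (assumes GNU binutils on an x86 box):

        echo 'xchg %eax,%eax; xchg %ecx,%eax; xchg %ebx,%ecx' > t.s
        as --32 -o t.o t.s && objdump -d t.o
        # expect the first two to assemble to the single bytes 90 and 91,
        # and the third to a two-byte 87 /r form with a ModRM byte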

    Read the article

  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers, each with 8 CPU cores. Each server runs 8 network services, and each service hosts an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services; the services do something based on the messages and potentially notify N >= 1 of the clients of state changes. Sure, it sounds like a botnet, but it isn't; consider how IRC works with c2s and s2s connections and s2s message relaying. The servers are in the same data center and can communicate over a private VLAN at 1 GigE. Messages are < 1 KB in size. How would you coordinate which services on which host should receive and relay messages to connected clients for state-change messages? There's an infinite number of ways to solve this problem efficiently:

    - AMQP (RabbitMQ, ZeroMQ, etc.)
    - Spread Toolkit
    - N^2 connections between all services (bad)
    - Heck, even run IRC!
    - ...

    I'm looking for a solution that perhaps exploits the fact that there's only a small closed cluster, is easy to admin, scales well, and is "dumb" (no weird edge cases). What are your experiences? What do you recommend? Thanks!
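
    As one concrete shape for the "small closed cluster" case, a pub/sub broker reduces the coordination problem to channel naming (a sketch using redis-cli; the channel name and payload are made up):

        # each service subscribes to the state channels of the clients it hosts
        redis-cli SUBSCRIBE svc:7:state
        # any service that produces a state change publishes it once; whichever
        # service holds the client's TCP connection relays the message onward
        redis-cli PUBLISH svc:7:state '{"client":1138,"state":"changed"}'

    This avoids the N^2 mesh while keeping the services themselves "dumb" about topology.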

    Read the article

  • Error message when running "make" command: /usr/bin/ld: i386 architecture of input file is incompatible with i386:x86-64 output

    - by user784637
    I am unable to create a working executable by running the make command in a tree previously built on an i386 machine. I'm getting an error message of the form:

        me@me-desktop:~$ make
        /usr/bin/ld: i386 architecture of input file `../../Lib/libProgram.a(something.o)' is incompatible with i386:x86-64 output

    I've been told and reassured that this program has been tested and successfully compiled on 64-bit Fedora. I'm running a 64-bit machine:

        me@me-desktop:~$ uname -m
        x86_64

    I'm running Ubuntu 10.04:

        me@me-desktop:~$ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 10.04.3 LTS
        Release:        10.04
        Codename:       lucid

    I'm using g++:

        me@me-desktop:~$ g++ --version
        g++ (Ubuntu 4.4.3-4ubuntu5) 4.4.3
        Copyright (C) 2009 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions. There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

    I'm also using libtool:

        me@me-desktop:~$ libtool --version
        ltmain.sh (GNU libtool) 2.2.6b
        Written by Gordon Matzigkeit <[email protected]>, 1996

    Any clues as to what is going wrong?
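
    The linker is saying that the archive still contains 32-bit objects from the earlier i386 build, while everything compiled now is 64-bit. Two quick checks/fixes, as a sketch (the archive path is the one from the error message; -m32 assumes the 32-bit toolchain and libraries are installed):

        # confirm the archive members really are 32-bit objects
        mkdir /tmp/libcheck && cd /tmp/libcheck
        ar x /path/to/Lib/libProgram.a     # the archive named in the error
        file *.o                           # expect: ELF 32-bit LSB relocatable, Intel 80386
        # then either rebuild that library from source on this 64-bit box, or
        # force the whole build to 32-bit:
        make clean && make CFLAGS=-m32 CXXFLAGS=-m32 LDFLAGS=-m32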

    Read the article

  • When modern computers boot, what initial setup of RAM do they execute, and how does it exactly work?

    - by user272840
    I know the title reeks of confusion, and some of you might assume I am just wondering how the computer boots in general, but I'm not. I'll sort this out for you now:

    1. Onboard firmware is how mostly all modern computer devices work, with or without EFI/UEFI (even without "onboard firmware", older computers still employed bank switching or similar methods with snap-in firmware, cartridges, etc.).
    2. On startup there are no "programs" running in the traditional sense yet, i.e. no kernel, OS, or user applications; every instruction, especially the very first one, is specified by the Instruction Pointer, I am guessing. How is the IP/PC/etc. set to first point to an address for a BIOS/firmware instruction, and how do the BIOS instructions map themselves into memory prior to startup?
    3. Aside from MMIO, the BIOS uses certain RAM addresses to hold instructions. The big question mark comes in when I ask: how does the BIOS do this?

    Conclusion: I am assuming that with the very first instruction there is an initial hardware setup for the BIOS prior to complete OS bootup. What I want to know is whether it's hardware-engineered to always work this way, whether there's a step in this bootup method I am missing or a gap of information I am unaware of, and how this all works from the very first instruction, including the RAM data itself.

    Read the article

  • Difference between “system-on-chip” and “CPU”

    - by Tim
    Very confused. Some websites have this line:

        iPhone 5s CPU: Apple A7

    Other websites say:

        iPhone 5s System-on-chip: Apple A7  CPU: 1.3 GHz 64-bit dual core

    And other sources say:

        iPhone 5s System-on-chip: Apple A7  CPU: 1.3 GHz 64-bit dual-core Apple A7

    Wikipedia says: "The Apple A7 is a 64-bit system on a chip (SoC) designed by Apple Inc. It first appeared in the iPhone 5S, which was introduced on September 10, 2013. Apple states that it is up to twice as fast and has up to twice the graphics power compared to its predecessor, the Apple A6. While not the first 64-bit ARM CPU, it is the first to ship in a consumer smartphone or tablet computer." There are two sentences here: "The Apple A7 is a 64-bit system on a chip (SoC)" and "While not the first 64-bit ARM CPU". Wikipedia also says "The A7 features an Apple-designed 64-bit 1.3–1.4 GHz ARMv8-A dual-core CPU, called Cyclone". So is the system on chip also the CPU? Very confused.

    Read the article

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a Firewall/Router server in front of a Web Application server and a Database server. Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean. We're now wondering about the specs of our two new machines (the Web App and Firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our Firewall/Router server, as we're pretty sure it won't be taxed too heavily, but we are interested in our Web App server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on processor speed and memory:

    - Quad-Core Intel Xeon X3323 2.5 GHz (2x3M cache), 1333 MHz FSB
    - 16 GB DDR2 667 MHz

    But when I was looking for a cheap second-hand machine for our Firewall/Router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the Web App server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs?

    - 2x Dual-Core AMD Opteron 2218 2.6 GHz (2 MB cache), 1000 MHz HT
    - 16 GB DDR2 667 MHz

    Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often to reduce their power costs, and that a 2.6 GHz processor is still a 2.6 GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

    Read the article

  • I am confused between PHP and ASP.NET to choose as a career in Indian software development context

    - by Confused_Guy
    I need your help (especially from software professionals in India). I completed my MCA in 2009. After that, instead of joining a software company, I took a teaching job near my home, and in the meantime I prepared myself for public-sector (bank) jobs. I continued that job for one more year and left it in 2010. Now, in 2012, I feel that I should have taken a software job, so that I could have earned my bread and butter while preparing for the public-sector exams, and because, given my qualification, it would pay best. Now I want to go back into the software industry, but everyone is asking for experience, and I don't have any. So which language should I learn, and what should I do about the two-year gap? Some of my friends suggested going with PHP, as it's easier and quicker to get a PHP job in India, but here the PHP guys get less salary than the ASP.NET ones. I am planning to begin with PHP, but is it possible to switch to ASP.NET after two years of experience? Where I stand:

    - Java: I know up to servlets & JSP, which counts for little in the current market.
    - ASP.NET: I know the basics, up to database connection (i.e. GridView).
    - PHP: Only the basics.

    So what should I do now? Which is most in demand? Is PHP good? It feels a lot like JSP pages to me. Please guide me; all your suggestions are welcome.

    Read the article

  • How can I get non-programmer colleagues on board with bespoke software rather than Dynamics CRM + Sharepoint?

    - by Bendos
    I am working with a company which designs and builds one-off machines. They have been 'dabbling' with hosted Dynamics CRM and Sharepoint (on different servers!) in an attempt to centralise their data and help colleagues collaborate more effectively across projects, but they haven't used either system to its potential. Now we are looking at the engineering department, who already use a form of version control software for their various CAD files (Autodesk Vault). However, it is becoming increasingly necessary to implement a more generic file version control system, as they use many more files than can be managed in Vault (sometimes just photos or scans of paper documents), hence the interest in Sharepoint. However, as the 'programmer' of the bunch, I can see several scenarios which don't seem to fit well with the Dynamics + Sharepoint approach: simple reports based on cross-table queries, exporting certain metrics as a spreadsheet, defining project hierarchies and many-to-many relationships, and so on. As such I have been pushing for an in-house-developed 'ECM'/'ERP' software package (perhaps in .NET or PHP). Some colleagues seem to attach a greater value to the Microsoft software (perhaps because it has a logo!) but don't see that it's just a framework, not a solution. Can anyone provide a good example of when custom software would actually be better than using Dynamics + Sharepoint, and how do I relate that to non-technical staff?

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:

    1. User opens page to view web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. An FTP connection is enabled for the camera's FTP user.
    4. Web camera opens an FTP connection to the server.
    5. Web camera begins taking photos.
    6. Web camera sends each photo to the FTP server.

    On each image URL request:

    1. Server reads the latest image on the hard drive uploaded via FTP for that camera.
    2. Server deletes any older images from the disk.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that, instead of having the files read from disk, the web server would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight to reading from the hard drive. The firmware provider for the cameras states they can build an HTTP client which, instead of uploading the image over FTP, would POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera issues POST requests with its latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. Clients can then pull the latest image from Redis, which should be extremely quick as the images will always be in memory. The data flow would then become:

    1. User opens page to view web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. Camera is sent an HTTP request to start posting images to a provided URL.
    4. Web camera begins taking photos.
    5. Web camera sends POST requests to the server as fast as it can.

    On each image URL request:

    1. Server reads the latest image from Redis.
    2. Server tells Redis to delete the older image.

    My questions are:

    - Are there any greater overheads in transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many cameras we could have streaming at once?
    - Is there any way to prevent potentially DOS'ing our own servers with web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix make posting the images too slow?

    Thanks in advance, Alan
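
    The Redis leg of the proposed flow can be exercised end to end from a shell before any PHP is written (a sketch; the endpoint, key name, and camera id are made up):

        # camera side: POST the newest frame to the ingest endpoint
        curl -s -X POST --data-binary @frame.jpg http://server/cam/42/frame
        # what the PHP handler would do with it: keep only the latest frame per camera
        redis-cli -x SET cam:42:latest < frame.jpg
        # client side: the image URL handler serves the current value of cam:42:latest

    Using one key per camera and overwriting it with SET also removes the explicit "delete the older image" step, since each write replaces the previous frame in place.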

    Read the article

  • How can I get better at explaining complex software processes to developers?

    - by Lostsoul
    I'm really struggling with my software specs. I am not a professional programmer, but I enjoy programming for fun, and I've made some software that I want to sell later. I'm not happy with the code quality, so I wanted to hire a real developer to rewrite my software in a more professional way, so that it will be maintainable by other developers in the future. I read and found some sample specs, structured my own document the same way, and asked a developer friend to read it and give me advice. It took him an hour and a half to understand exactly what I was trying to do and how I did it (my algorithms, stack, etc.). How can I get better at explaining things to developers? I add many details and explanations for everything (including working code), but I'm unsure of the best way to learn to pass on detailed domain knowledge (my software applies big data, machine learning, and graph theory to finance). My end goal is for them to understand as much as possible from the document and then ask about anything they do not understand, but right now it seems they need to extract a lot of information from me directly. How can I get better at communicating domain knowledge to developers?

    Read the article

  • Install correct libraries depending on 64/32 bit

    - by Rich
    I am using Bash to install a customised version of JBoss, and one of the things I would like to do is install the correct version of the Apache Portable Runtime (APR), which is a native binary. This script could be run on both 32-bit and 64-bit versions of RHEL. What are my options for identifying which version of the APR to install? I think we only have 32-bit and x64-based systems here, but I would still like to identify ia64 systems so that the script can refuse to install on that type of machine. I am aware of using uname -m and grepping /proc/cpuinfo to find out, but I was wondering which approach others would recommend.
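
    For what it's worth, the uname -m route usually ends up as a case statement like this (a sketch; the package file names are placeholders):

        case "$(uname -m)" in
            x86_64) apr_pkg="apr-x86_64.tar.gz" ;;
            i?86)   apr_pkg="apr-i386.tar.gz" ;;
            ia64)   echo "Itanium systems are not supported" >&2; exit 1 ;;
            *)      echo "Unrecognised architecture: $(uname -m)" >&2; exit 1 ;;
        esac

    Note that uname -m reflects the running kernel, so a 32-bit userland on a 64-bit kernel will still report x86_64; if that case matters, checking the ELF class of an installed binary (e.g. with file /bin/ls) is the safer test.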

    Read the article

  • equivalent to CDN but for dynamical content?

    - by ajsie
    So I know that for serving static elements (CSS, JS, images, videos, etc.) you should use a CDN, since its nodes are spread throughout the world. But how could I spread out my Apache servers? Is there an equivalent to a CDN for dynamic pages, or is it just the traditional LAMP way? If so, I guess my best option is to find an international hosting provider that hosts in different countries, so that content is served from the country nearest the client machine. Any suggestions for such hosting providers? Or is it best practice to contact different, unrelated hosting providers in different countries? What is the right way to go?

    Read the article

  • Software development metrics and reporting

    - by David M
    I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use (like this one), but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation. As an example, my view is that code coverage is a useful metric in the following ways (and maybe others):

    - For a team's own internal use, when combined with other measurements.
    - For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if teams A and B have coverage this month of 75 and 50, I'd be more concerned about team A than B if the previous month they'd had 80 and 40).
    - For senior management, when presented as an aggregated statistic across a number of teams or a whole department.

    But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artificial attempts to bolster coverage with tests that merely exercise, rather than test, code. I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation? I don't want people to feel their performance is being assessed based on a metric they can artificially influence; at the same time, senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations? EDIT: We definitely want to use metrics as a tool for organisational improvement, not as a tool for individual performance measurement.

    Read the article

  • Java Robot key activity seems to stop working while certain software is running

    - by Mike Turley
    I'm writing a Java application to automate character actions overnight in an online game (specifically, it catches fish in Final Fantasy XI). The app makes heavy use of Java's Robot class, both for emulating user keyboard input and for detecting colour changes on certain parts of the screen. It also uses multithreading and a Swing GUI. The application seems to work perfectly when I test it without the game running, using screenshots to trigger the app's responses into Notepad. But for some reason, when I actually launch FFXI and start the program, all of my keyboard and mouse manipulations stop working altogether. The program is still running, and the Robot class is still able to read pixel colours, but Robot.keyPress, Robot.keyRelease, Robot.mouseMove, Robot.mousePress and Robot.mouseRelease all do nothing. It's the strangest thing. To test it, I wrote a simple loop that just keeps typing letters, and focused Notepad: I'd start the game, refocus Notepad, and it would do nothing; then I'd exit the game, and it'd start working again immediately. Has anyone else come across something like this, where specific software stops certain functions of Java from working? To make this more interesting: last year I wrote a very similar program, using the same classes and programming techniques, to automate healing a party in the game as they fight. Back then it worked perfectly. After running into these problems I dug up that old program, ran it without making any changes, and found that it has the same problems. The only differences between now and when it was working: I was running Windows Vista and now I'm running Windows 7, and several new Java versions as well as FFXI versions have been released. What the hell is going on? (If anyone needs to see my source code, email me at [email protected]; I'm trying to keep it to myself.)

    Read the article
