Search Results

Search found 21053 results on 843 pages for 'process'.


  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High’s presentation “The SOA Component Model” hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) could be incorporated into an existing enterprise. According to IBM’s developerWorks website, SCA is a set of specifications that outline a model for constructing applications and systems using a Service-Oriented Architecture (SOA); in addition, SCA builds on open standards such as Web services. I can easily see some large IT shops eventually dividing their development teams or work groups into Component/Data Object groups and Standard Development groups. The Component/Data Object group would work only on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development group would work on new and existing projects that combine those components to accomplish various business tasks. In my opinion, incorporating SCA into any IT department will initially slow the delivery of new features because of the time needed to create the new, loosely coupled components. However, once a company matures in its SCA process, the rate of feature delivery should increase considerably, because the loosely coupled components needed to add new features will already be built and ready to drop into any new development request.

    References: BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved November 27, 2011, from developerWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/

    High, R. (2007). The SOA Component Model. Retrieved November 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model

    Read the article

  • OpenLDAP RHEL 6

    - by AndyM
    Hi all. I've been configuring OpenLDAP on RHEL 6, and it seems you have to run the following to rebuild the config directories. I'm OK with that, but my issue is: say I want to change the server password, do I have to go through the whole process every time I change the config? Is there a way of changing the slapd config after it's been built using the RHEL 6 method? Below is the advice I've found on the net, from http://www.linuxtopia.org/online_books/rhel6/rhel_6_migration_guide/rhel_6_migration_ch07s03.html

    This example assumes that the file to convert from the old slapd configuration is located at /etc/openldap/slapd.conf and the new directory for OpenLDAP configuration is located at /etc/openldap/slapd.d/.

    Remove the contents of the new /etc/openldap/slapd.d/ directory:

        rm -rf /etc/openldap/slapd.d/*

    Run slaptest to check the validity of the configuration file and specify the new configuration directory:

        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d

    Configure permissions on the new directory:

        chown -R ldap:ldap /etc/openldap/slapd.d
        chmod -R 000 /etc/openldap/slapd.d
        chmod -R u+rwX /etc/openldap/slapd.d
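    For what it's worth, once the cn=config tree exists you shouldn't have to repeat the whole conversion just to change a single setting such as the root password; the online config can usually be edited directly with ldapmodify. A minimal sketch, with two assumptions you'd want to verify first: that the database entry on this install really is olcDatabase={2}bdb,cn=config, and that the local root identity is allowed to write to cn=config over the ldapi socket:

        # generate a new hashed password
        slappasswd -s 'NewSecret'          # prints something like {SSHA}....

        # replace olcRootPW in the online config (no full rebuild, no slapd restart)
        ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
        dn: olcDatabase={2}bdb,cn=config
        changetype: modify
        replace: olcRootPW
        olcRootPW: {SSHA}paste-the-hash-printed-above
        EOF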

    Read the article

  • Running Multiple Instances of Firefox

    - by Aaron Bush
    I am running SysInternal's Desktops 1.02 and FireFox 3.6.2. I have noticed that while I can have IE8 open in multiple Virtual Desktops, you can not Firefox. If you try you get the error message: Firefox is already running, but is not responding. To open a new window, you must close the existing FireFox process, or restart your system. I did a little digging around to work around this and came up with creating a second profile via the Firefox profile manager (accessed by starting FF with the "-p" switch). This unfortunately created a new problem which is my add-ons (of which I use many) do not stay synchronized between profiles. Is there a better approach here?
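    If the add-on duplication is tolerable, the second-profile approach can at least be scripted so each virtual desktop gets its own instance without going through the profile manager every time. A rough sketch, assuming a second profile named "desktop2" has already been created; the -no-remote switch is what stops the new instance from handing off to the copy that's already running (the switches are the same on Windows and Linux builds of Firefox 3.6):

        firefox -no-remote -P "desktop2"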

    Read the article

  • Connect to bluetooth device from command line

    - by Ilari Kajaste
    Background: I'm using my Bluetooth headset as audio output. I managed to get it working by following the long list of instructions in the BluetoothHeadset community documentation, and I have automated the process of activating the headset as the default audio output with a script, thanks to another question. However, since I use the Bluetooth headset with both my phone and my computer (and the headset doesn't support two input connections), in order for the phone not to "steal" the connection when the handset is turned on, I force the headset into discovery mode when connecting to the computer (the phone gets to connect to it automatically). So even though the headset is paired fine and would auto-connect in a "normal" scenario, I always have to use the little Bluetooth icon in the notification area to actually connect to my device (see screenshot). What I want to avoid: the GUI for connecting to a known and paired Bluetooth device. What I want instead: I'd like to make Bluetooth do exactly what clicking the connect item in the GUI does, only from the command line. I want to use the command line so I can make a single-keypress shortcut for the action and wouldn't need to navigate the GUI every time I want to establish a connection to the device. The question: How can I attempt to connect to a specific, known and paired Bluetooth device from the command line? Further question: How do I tell whether the connection was successful or not?
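    On the BlueZ 4.x stacks shipped around that era, the GUI's "Connect" item is ultimately a D-Bus call, so one possible angle is issuing the same call with dbus-send. A very rough sketch; the interface name used here (org.bluez.Audio) and the exact object path format are assumptions that depend on the BlueZ version and the headset's profiles (it could be org.bluez.AudioSink or org.bluez.Headset instead), so inspect the bus with a tool like d-feet first:

        DEV=00_11_22_33_44_55          # your headset's address, with ':' replaced by '_'
        ADAPTER=$(dbus-send --system --print-reply --dest=org.bluez / \
            org.bluez.Manager.DefaultAdapter | awk -F'"' '/object path/ {print $2}')

        # ask BlueZ to connect the audio service on that device; an error reply
        # (non-zero exit status) is how you tell the connection attempt failed
        dbus-send --system --print-reply --dest=org.bluez \
            "$ADAPTER/dev_$DEV" org.bluez.Audio.Connect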

    Read the article

  • Checking for orphaned snapshots - ESXi5

    - by Tim Alexander
    So we had some issues with our passive mail node over the weekend while doing VMware Tools updates, and to resolve a problem we had to revert to a snapshot and then reseed all the databases across. All in all everything seemed fine; the server works and CCR copy status is running fine. I used the "Delete All" option this morning to remove the snapshot, and according to vCenter the process completed with no errors and no "Needs Consolidation" flag. This all seems fine until I check the datastore that holds the VM on our SAN, where I can clearly see snapshots that are pretty big [see attached image]. These do not seem to be changing size, and the date modified is around the time the work was started for the VMware Tools update. Does this possibly mean that at some stage, possibly during reversion or hard resetting of the VM, they became orphaned? Are there any methods to check the orphaned status of snapshots? We are running ESXi 5.0 Update 1 with storage provided by an EMC SAN. Enterprise Plus is the license level.
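    One way to cross-check what vCenter reports is to look at the snapshot chain directly from the ESXi shell (with SSH enabled on the host). A minimal sketch, assuming you know the VM's name and its folder on the datastore; delta files that sit on disk but don't appear in the snapshot list are the usual sign of leftovers from a failed consolidation:

        # list registered VMs and note the numeric vmid of the mail node
        vim-cmd vmsvc/getallvms

        # show the snapshot tree ESXi itself believes exists for that VM
        vim-cmd vmsvc/snapshot.get <vmid>

        # compare against what is physically sitting on the datastore
        find "/vmfs/volumes/<datastore>/<vm-folder>" -name '*-delta.vmdk' -o -name '*-0000??.vmdk'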

    Read the article

  • How to make sure Windows PC is reasonably secure?

    - by Coder
    I'm not much of a network and network security expert, but I need to add an existing Windows PC to a network with always on connection. The problem is, I have no idea if the PC is really clean, and, actually, no knowledge to check it. I scanned the PC with Process Explorer to verify if all running processes are signed, ran an AVG scan, but this is where my knowledge ends. IIRC, there can be bad code attached to svchost or something, bad drivers, and so on, but I have no idea how to check all those things. Reformatting the PC is unfeasible as of now. Are there any suggestions on what I could do?

    Read the article

  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In software development, in my opinion, if team members' code is integrated early, the code base stays more in sync, and that has a lot of pros:

    - Early integration helps team members avoid painful merges.
    - It encourages better coding habits, because everyone makes sure every day that they don't break co-workers' code.
    - Both developers and architects (code reviewers) can detect bad design decisions or wrong development directions in near real time, preventing useless work.

    Concretely, I'm talking about getting the latest version of the code base and checking in your own code to source control on a daily basis. When you start your coding day (i.e. you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, when you're about an hour away from leaving work and going home, your last action is checking in your code and making sure your day's work doesn't break the project's build process. Rather than updating and checking in your code only once you've finished an entire task, I believe the best approach is setting small, flexible personal milestones and checking in the code once you finish one of them. I really believe this coding approach fits better with the agile project management concept. Do you know of any document, blog post, wiki, article or whatever you can suggest that is in sync with my opinion? And do you find any problem working with this approach? Thank you in advance.
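    As an illustration only (the post doesn't name a particular version control system, so git here is an assumption, and the build/test scripts are hypothetical), the daily rhythm being described boils down to something like this:

        # start of the day: bring your working copy up to date before writing anything
        git pull --rebase origin master

        # ...work in small personal milestones, committing as each one is finished...
        git add -A && git commit -m "Finish milestone: extract pricing rules"

        # end of the day: make sure your work doesn't break the build, then share it
        ./build.sh && ./run-tests.sh        # hypothetical build/test scripts
        git push origin master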

    Read the article

  • Oracle@info360: Advance Beyond Point Solutions To An Enterprise Content Strategy

    - by kellsey.ruppel(at)oracle.com
    The info360/AIIM conference is March 22-24 in Washington DC. We have a number of customer speakers this year talking on the theme of “Advance Beyond Point Solutions To An Enterprise Content Strategy.” These customers all started by addressing a particular use case, but then used the infrastructure they had created to quickly and cost effectively stand up solutions to new business problems. Andy MacMillan, VP of Product Management at Oracle, will give a thought provoking opening keynote at 8:50 AM on Tuesday, March 22nd. He will be joined by Juan Jose Goldschtein, the CIO of the Organization of American States. The OAS has developed a human rights website that is the front end to a case management system for human rights violations. The implementation supports digital signatures on iPads, so their executives can approve workflows and keep cases moving forward while they are busy traveling and investigating abuses.

    Other customer speakers include:
    - Tom Robinette, Director of Applications and IT Engineering, Dresser-Rand
    - Robin Crisp, Program Manager, FDA
    - Monica Crocker, Corporate Records Manager, Land O’ Lakes
    - Brian Skapura, The American Institute of Architects
    - Kathy Adams and Leslie Becker, The Nature Conservancy
    - Irfan Motiwala, Sr. VP, Moody’s Investment Services
    - Molly Wenzler, Director of Electronic Media, MeadWestvaco

    Other sessions include our Super Session that kicks off the Oracle Track @info360 on Wednesday. At 11:00 AM, Senior Director of Product Marketing, Howard Beader will present The Social Enterprise – Combining People, Processes and Content. This session will focus on how customers have brought social media, business process management, and content management together to supercharge their organizations. Oracle customers can arrange one-on-one meetings with Oracle executives and product experts, and attend the VIP customer appreciation event. Oracle will be joined by Oracle partners:
    - Fujitsu
    - Keste
    - TeamInformatics
    - Kapow
    - Sena Systems
    - DTI

    You can learn more about discounts for Oracle customers and register on our Oracle@info360 page. To see more about the customers and sessions that will be presented, you can look at the Oracle Track page on the AIIM/info360 website.

    Technorati Tags: oracle, AIIM, info360, content management, social enterprise

    Read the article

  • Give root password for maintenance

    - by Jevgeni Smirnov
    After entering shutdown now in the terminal, everything runs normally and then:

        All processes ended within 2 seconds...done
        INIT: Going single user
        INIT: Sending processes the TERM signal
        INIT: Sending processes the KILL signal
        Give root password for maintenance (or....

    I press Ctrl+D, and it shows me the Debian login screen. Shutdown through the GUI works properly.

    UPDATE 1: It seems some process hangs. Moreover, I've managed to power off the server after several retries. Recently I've installed only ntp and ntpdate, nothing more. I suppose one of them might be conflicting with iptables.

    Read the article

  • Application pool in IIS 7.5 crashes with identity set to NetworkService

    - by Ravi
    We have a web application running on IIS 7.5 with the identity of the custom application pool set to NetworkService. This was working fine for some days, and now the application pool has gone into a stopped state. The following error message is displayed in the Event Viewer:

        Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bcd2b
        Faulting module name: ntdll.dll, version: 6.1.7600.16559, time stamp: 0x4ba9b29c
        Exception code: 0xc0000005
        Fault offset: 0x00038c19
        Faulting process id: 0xa28
        Faulting application start time: 0x01cbb2e5707aa2b2
        Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe
        Faulting module path: C:\Windows\SysWOW64\ntdll.dll
        Report Id: ae3f0610-1ed8-11e0-abf8-000c297f918f

    We are able to start the application pool only after changing the identity to LocalSystem. Why does the application pool fail to run with the identity set to NetworkService? Can anyone help us resolve this issue?

    Read the article

  • Recover open but deleted file on Linux using ln instead of cp

    - by Yang
    Say I have a file that's downloading (from a source that's hard to re-download from), but accidentally deleted from the filesystem namespace (/tmp/blah), and I'd like to recover this file. Normally I could just cp /proc/$PID/fd/$FD /tmp/blah, but in this case that would only get me a partial snapshot, since the file is still downloading. Furthermore, once the download completes, the downloading process (e.g. Chrome) will close the FD. Any way to recover by inode/create a hard link? Any other solutions? If it makes any difference, I'm mainly concerned with ext4. Thanks in advance.
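    As far as I know, a hard link can't be pointed back at a deleted inode from ordinary userspace on ext4 (the kernel refuses to link a file whose link count is already zero), so the realistic options all involve copying the data out through /proc while the descriptor is still open. A small sketch of one way to do that without losing the tail end of the download, assuming GNU coreutils; $PID and $FD are the downloading process and its descriptor, found as shown:

        # locate the descriptor that points at the deleted file
        ls -l /proc/$PID/fd | grep deleted

        # copy what has been written so far and keep following new data as it arrives;
        # stop it (Ctrl+C) once the download has finished
        tail -c +1 -f /proc/$PID/fd/$FD > /tmp/blah.recovered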

    Read the article

  • If I re-key a SSL certificate for a 2nd/backup server, does the original still work?

    - by Matt
    We have a production server with a wildcard SSL certificate. I'm in the process of creating a backup/failover server that will host the same domains, and therefore will also need the SSL certificate. The certificate on the primary server was installed with the private key non-exportable, so I am unable to export the certificate for installation on the failover server. My question then is - if I re-key the certificate from Go Daddy, does the original certificate installed on the primary server cease to be valid? As an aside, the original (primary) server is IIS 6, the failover is IIS 7 (once the failover is operational, we'll likely upgrade the primary).

    Read the article

  • Can I make Apache drop a connection when matching a URL?

    - by PP
    Using mod_rewrite I can construct a rule to respond with a clean error code (e.g. 404 not found, 410 gone, or 403 unauthorised) when a page is requested that I don't want to serve. But frequently I get completely erroneous requests from hackers scanning my website for vulnerabilities or possibly cross-site scripting attempts. For these customers I do not want to return a clean error - I'd rather do something else like immediately drop the connection with no response or, alternatively, hold the connection open for a lengthy period of time to frustrate the automated process. Any ideas how to accomplish this with Apache? I've read that nginx has the ability to immediately terminate a connection when a particular pattern is matched.

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them.

    The recruiting problem

    I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash").

    The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you.

    But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software.

    The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners.

    The contractor problem

    I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors.

    The education problem

    I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn.

    The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. I'm hell bent on passing that experience to others.

    Process issues

    If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff.

    The prognosis

    I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Can't seem to disable Java Automatic Update

    - by sbussinger
    I'm just tweaking my new Windows 7 laptop and wanted to disable the automatic Java updating (and thus kill the silly jusched.exe background process), but I can't seem to get it to actually turn off. I found the Java Control Panel applet and the settings on the Update tab that should control it. I can turn them off, apply the change, and close the dialog successfully. But if I just open the dialog back up again right away, I see that the changes weren't actually saved. I've tried numerous times and it just doesn't take. What's up with that? I also tried to disable the icon in the system tray and got the same effect. Changing the size of the Temporary Internet Files cache works, however. Any ideas? Thanks!

    Read the article

  • apt-get doesn't download files from NFS location

    - by Pravesh
    I switched to Unix three months ago and am trying to understand the install process, and apt-get in particular. I am able to successfully download and install packages when I configure my repository with an http location in the /etc/apt/sources.list file, e.g.:

        deb http://web.myspqce.com/u/eng/rose/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free

    This downloads the package (into /var/cache/apt/archives) and installs it when I use apt-get install. When I change the source location to a file path instead of http (an NFS mount point), the package is installed but NOT downloaded into /var/cache/apt/archives:

        deb file:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free

    Please let me know if there is any configuration or setting I have to make so apt-get will both download and install the package when I use (NFS) file:/ instead of http:/ in sources.list. To achieve this I could use apt-get --download-only and then apt-get install, downloading and installing in two separate calls, but I want to know why the package is not downloaded by apt-get install when used with file:/ in sources.list.
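    For what it's worth, this looks like expected behaviour for the file:/ transport: since apt can read the .deb files in place, it doesn't bother copying them into /var/cache/apt/archives. If having them land in the cache is the actual goal, one option to try is apt's copy:/ transport instead of file:/ in sources.list. A sketch, reusing the path from the question:

        # sources.list entry using the copy method, which stages packages in the cache
        deb copy:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free

        # then refresh and install as usual
        apt-get update
        apt-get install <package>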

    Read the article

  • Using Excel data in Microsoft Publisher

    - by TK
    I have never worked in Microsoft Publisher. To build the presentation we're having to re-enter the same information from a Microsoft Excel master. For instance, my Excel sheet has these columns: Item Title, Item Description, Item Dimensions, Notes, Created Date. From there, I'm having to re-type the information underneath a picture of the item in PowerPoint (or Publisher) in order to present to the client. So I'm re-typing the item name, description, dimensions, etc., and I'm also reformatting slides each time I do this. I know there's a way to streamline this process, to build something in PowerPoint and/or Publisher that will bring in the data needed based on a merge (or maybe a macro), but I haven't been able to figure out how. Any suggestions?

    Read the article

  • Mouse doesn't work & internet connection not made in Ubuntu 12.04 LTS

    - by David Skare
    Yesterday, Nov 15, 2012, I booted into my Ubuntu 12.04 LTS system. It has resided on a Crucial 128 GB SSD with about 90% free space since early summer. I also have Windows 7 loaded on another Crucial 256 GB SSD. Ubuntu has set up a dual boot system for me even though each OS has its own SSD. I have been using this setup without problems since summer. Yesterday, when the boot process finished, my Microsoft Comfort Mouse 3000 did not work and there was a message that Ubuntu was not connected to the internet. So w/o the mouse I was forced to turn the machine off manually. About 4 days ago Ubuntu worked fine and booting into Win 7 also works fine. I have a backup machine with the same style mouse on it so I swapped the mouse onto this system. Same results. But both mice work when booting into Win 7. Today I removed both SSDs and installed my Ubuntu 12.04 HD which has not been used since I moved Ubuntu to the SSD from it. Same results. Between the last time I used Ubuntu 12.04 on the SSD and when I tried to use it again I made no changes to my machine, either hardware or software. My machines specs are: AMD FX-6100, MSI 990FXA-GD65 AM3+ format with latest BIOS (Ver 19.9), Corsair Vengeance 1866 MHz memory - 16 GB (4GB X 4 sticks), MSI N580GTX video card (nVidia 306.97 drivers), Sony Bravia 32" HD TV as a monitor, Pioneer BluRay DVD-RW, DSL connection to internet thru a router (10 mps), Crucial 128 GB SSD (90% free space), Microsoft Comfort Mouse 3000 I try to maintain current BIOS and drivers for all devices. I mostly use my Ubuntu system for programming in GCC and OpenCOBOL, surfing the internet and e-mailing. No games are installed. I'm stumped! If anyone has experienced this same problem I'd appreciate knowing how you solved it. TIA, Dave

    Read the article

  • Problem upgrading kernel on debian 3.1

    - by exhuma
    Hi, I have a quite old box in a remote server farm. So I have no direct access. Only remote SSH (and via SSH to a serial console). I haven't updated this box in ages. Now, whenever I want to install a new package, a dependency to glibc appears. Unfortunately, the install of glibc depends on a 2.6 kernel and I am running a venerable 2.4 kernel (one more reason to upgrade). The problem is, that the install of a new kernel has an indirect (over locales) dependency to glibc. So, to install glibc, I need a new kernel. For a new kernel, I need to upgrade glibc. Essentially I am blocked. What's the best way to proceed considering I have no "hardware" access? Here's a quick transcript of the upgrade process: [green:~]% sudo aptitude install linux-image-686 Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... Done The following packages are unused and will be REMOVED: gcc-4.3-base The following NEW packages will be automatically installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 module-init-tools yaird The following packages have been kept back: adduser apache2 apache2-mpm-prefork apache2-utils apache2.2-common apt apt-utils aptitude autoconf autotools-dev awstats base-files base-passwd [...snip...] util-linux vacation vim vim-common wamerican wbritish wget whiptail whois wwwconfig-common zlib1g The following NEW packages will be installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 linux-image-686 module-init-tools yaird The following packages will be upgraded: hotplug libc6 2 packages upgraded, 8 newly installed, 1 to remove and 277 not upgraded. Need to get 0B/22.7MB of archives. After unpacking 52.1MB will be used. Do you want to continue? [Y/n/?] Writing extended state information... Done Preconfiguring packages ... (Reading database ... 34065 files and directories currently installed.) Preparing to replace libc6 2.3.6.ds1-13 (using .../libc6_2.7-18lenny2_i386.deb) ... Checking for services that may need to be restarted... Checking init scripts... WARNING: init script for postgresql not found. [ --- libc6 config screen appears here --- ] WARNING: POSIX threads library NPTL requires kernel version 2.6.8 or later. If you use a kernel 2.4, please upgrade it before installing glibc. The installation of a 2.6 kernel _could_ ask you to install a new libc first, this is NOT a bug, and should *NOT* be reported. In that case, please add etch sources to your /etc/apt/sources.list and run: apt-get install -t etch linux-image-2.6 Then reboot into this new kernel, and proceed with your upgrade dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb (--unpack): subprocess pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Ack! Something bad happened while installing packages. Trying to recover: dpkg: dependency problems prevent configuration of locales: locales depends on glibc-2.7-1; however: Package glibc-2.7-1 is not installed. dpkg: error processing locales (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: locales Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... 
    Done

    Now, if I follow the instructions as prompted I get the following. Note that I am using aptitude instead of apt-get to benefit from the better dependency tracking. I did try with apt-get first, but that led me to the same problem.

        [green:~]% sudo aptitude install -t etch linux-image-2.6.26-2-686
        Reading Package Lists... Done
        Building Dependency Tree
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        E: Unable to correct problems, you have held broken packages.
        E: Unable to correct dependencies, some packages cannot be installed
        E: Unable to resolve some dependencies!
        Some packages had unmet dependencies. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created or
        been moved out of Incoming. The following packages have unmet dependencies:
          linux-image-2.6.26-2-686: Depends: initramfs-tools (>= 0.55) but it is not installable or
                                             yaird (>= 0.0.13) but it is not installable or
                                             linux-initramfs-tool which is a virtual package.

    Any ideas?
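    For whatever it's worth, the error above says the immediate blocker is initramfs-tools (or yaird) being uninstallable, not glibc itself, so one avenue to try is pulling initramfs-tools in from etch in the same transaction as the kernel. A rough sketch, under the assumption that the etch archive (now served from archive.debian.org) is the right repository line for this box:

        # add the etch repository alongside the existing entries
        echo 'deb http://archive.debian.org/debian/ etch main contrib non-free' >> /etc/apt/sources.list
        apt-get update

        # install the initramfs generator together with the 2.6 kernel so the
        # dependency can be satisfied in one go
        apt-get install -t etch initramfs-tools linux-image-2.6-686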

    Read the article

  • Torrent, ISA Server 2006 and packet dropped due to TCP_NOT_SYNC

    - by Pascal
    Hi, I'm trying to get uTorrent 2.0.4 to work on a DMZ machine protected by ISA Server 2006. I've opened one inbound port (via publishing) and opened all the higher ports for that specific machine that runs uTorrent in my DMZ, and it's working almost fine. The problem is that I keep getting packets dropped with 0xc0040017 FWX_E_TCP_NOT_SYN_PACKET_DROPPED. Is there any way to disable this via the registry? Is there any way around this? The download speed fluctuates a lot, and when it starts hitting the upper limit that I've defined in uTorrent, the errors start popping up a lot, the download speed goes way down, and the process repeats on and on. Tks

    Edit: My outbound rules are:

        Port Range: TCP 10000-65535 Outbound
        Port Range: UDP 10000-65535 Send

    Edit: It's probably a bug handling requests from Windows 7. When I installed uTorrent on an XP machine, the problem went away.

    Read the article

  • How should I safely send bulk mail? [closed]

    - by Jerry Dodge
    First of all, we have a large software system we've developed and have a number of clients using it in their own environment. Each of them is responsible for using their own equipment and resources, we don't provide any services to share with them. We have introduced an automated email system which sends emails automatically via SMTP. Usually, it only sends around 10-20 emails a day, but it's very possible to send bulk email up to thousands of people in a single day. This of course requires a big haul of work, which isn't necessarily the problem. The issue arises when it comes to the SMTP server we're using. An email server is issued a number of relays a day, which is paid for. This isn't really necessarily the issue either. The risk is getting the email server blacklisted. It's inevitable, and we need to carefully take all this into consideration. As far as I can see, the ideal setup would be to have at least 50 IP addresses on multiple servers, each of which hosts its own SMTP server. When sending bulk email, it will divide them up across these servers, and each one will process its own queue. If one of those IP's gets blacklisted, it will be decommissioned and a new IP will replace it. Is there a better way that doesn't require us to invest in a large handful of servers? Perhaps a third party service which is meant exactly for this?

    Read the article

  • Monit can't detect MySQL, but I can

    - by Matchu
    Monit is configured to watch MySQL on localhost at port 3306:

        check process mysqld with pidfile /var/lib/mysql/li175-241.pid
          start program = "/etc/init.d/mysql start"
          stop program = "/etc/init.d/mysql stop"
          if failed port 3306 protocol mysql then restart
          if 5 restarts within 5 cycles then timeout

    My application, which is configured to connect to MySQL via localhost:3306, is running just fine and can access the database. I can even use MySQL Query Browser to connect to the database remotely via port 3306. The port is totally open and possible to connect to. Therefore, I'm pretty darn certain that it's running. However, running monit -v reveals that Monit cannot detect MySQL on that port:

        'mysqld' failed, cannot open a connection to INET[localhost:3306] via TCP

    This happens consistently, until Monit decides not to track MySQL anymore, as configured. How can I begin to troubleshoot this issue?
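    One thing worth ruling out (a guess, not a diagnosis) is a name-resolution mismatch: Monit resolves "localhost" on its own, and if that resolves differently than it does for the application (::1 versus 127.0.0.1, for example), the check can fail even though MySQL is reachable. A few commands that help narrow it down; the monitcheck user below is hypothetical:

        # what address/port is mysqld actually listening on?
        netstat -tlnp | grep 3306

        # does localhost resolve to 127.0.0.1, ::1, or both on this box?
        getent hosts localhost

        # can a plain client connect the same way Monit tries to?
        mysql -h 127.0.0.1 -P 3306 -u monitcheck -p

    If 127.0.0.1 works where localhost doesn't, pinning the check with "if failed host 127.0.0.1 port 3306 protocol mysql then restart" in the monit stanza is one way to sidestep it.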

    Read the article

  • Can we increase Torrent share ratio using Local Peer Discovery?

    - by Jagira
    I just want to know whether or not this is a flaw in the BitTorrent system. Let us assume that I am a member of a private torrent site which requires me to maintain a specific upload-to-download ratio. Will this work?

    1. I create a torrent of a large file, say [ Fedora Linux ~ 4 GB ], and upload it to the tracker.
    2. I download the same torrent using my ID and start it on another machine on the LAN or a virtual machine.
    3. Both clients have Local Peer Discovery enabled, so they will find 'em [ not via DHT ] and start x'ferring data using LAN bandwidth at LAN speeds.
    4. Though both uploads and downloads will increase, my ratio will also increase.

    If I reiterate the entire process 'n' times, the numerator in the "RATIO", i.e. upload, will become so large that the effect of downloads on the ratio will become small. I want to know whether this is legitimate???

    Read the article

  • I am transferring a nameserver domain, what do I need to update?

    - by Mech Software
    Perhaps I am totally overthinking this, but I have a domain name and name servers that are working just fine. I want to transfer the one domain name I have for my server, which is also the domain used by the nameservers. E.g. mydomain.com with nameservers ns1.mydomain.com and ns2.mydomain.com. I am transferring mydomain.com from the current registrar to the one I use for all my other domains. The question is: what do I have to update? Once the transfer is complete, mydomain.com will have ns1.mydomain.com and ns2.mydomain.com as its nameservers, as it does today. I was wondering, though, how ns1.mydomain.com and ns2.mydomain.com resolve if mydomain.com is pointing to ns1 and ns2. Am I overthinking this, or am I missing something in the process here? I always just enter the nameserver names when I configure any domains on my server. Do I have to set up A records somewhere for ns1 and ns2?
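    The piece that breaks the apparent chicken-and-egg is glue: the registrar registers ns1/ns2 as host objects with fixed IP addresses, and the parent (.com) servers hand those A records out alongside the NS delegation. A quick way to see whether that glue exists, and to re-check it after the transfer (assuming a .com domain; any of the gtld-servers works):

        # ask a .com TLD server directly; the glue A records for ns1/ns2 should
        # appear in the ADDITIONAL section of the referral
        dig +norecurse @a.gtld-servers.net mydomain.com NS

        # confirm the nameserver host itself still resolves from the outside
        dig +trace ns1.mydomain.com A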

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes in SAP, they are in need of a major refresh.

    The existing system:
    - A Microsoft Access project performs dashboard duties
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project
    - Uses 3rd-party libraries referenced by Access to directly interface with a control system (read: motors, conveyors, and counters)
    - Individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility

    The issue: This system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction, and the original sponsor is prescribing the desired deliverables that are being called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article
