Search Results

Search found 27684 results on 1108 pages for 'computer management'.


  • Why doesn't my laptop battery charge while the laptop is in use?

    - by larryb82
    Up until a week ago, my laptop has always been able to charge the battery while I'm using it. Now it will not charge unless the computer is sleeping, hibernating, or turned off. The icon in the system tray states that the battery is charging, but it is not animated (it used to be) and of course the power level does not increase. Otherwise the battery seems to be fine: battery life is decent (2h+), and while the laptop is in use and plugged in the battery maintains a constant charge. Any troubleshooting help would be great (i.e. is this a charger issue, a battery issue, a software issue, etc.?)

    Read the article

  • Is it possible to reinstall packages in Ubuntu without an internet connection?

    - by javamatt
    Hi everyone, while experiencing some massive problems with MySQL, I completely removed a package called rsyslog, and now I can no longer get on the internet to use the package manager to correct my mistake. I got rid of librdf0 as well (oops). I would like to download the missing packages onto a CD with another computer and manually reinstall them on my Ubuntu machine. Any idea where to find these? (I am assuming these are the packages I need; either way, I still need to get hold of the correct packages and install them.) Thank you all very much in advance. Matt

    Read the article

  • View changelog of all packages to be upgraded before upgrading

    - by Stein G. Strindhaug
    When using Synaptic on my Ubuntu desktop computer I can review the changelogs of all the packages to be upgraded, and deselect a package if I want. On my desktop I usually install everything, but I like to at least review what the changes are, so that I can delay an upgrade if I suspect it could cause problems with the development tools I use. On a server (Ubuntu Server) with no X server, how can I do the same thing on the console: list all packages that will be upgraded (apt-get --dry-run upgrade does this, along with a lot of noisy simulated install messages), view the changelog (if any) covering the installed version up to the version it will be upgraded to, and select which packages I want to ignore and which I want to upgrade. I've searched a lot for this but haven't found anything; possibly I'm not using the correct terminology, but surely this must be possible. Synaptic must get its info from some low-level tool, I assume? Complicated shell scripts are welcome too, if this is not already easily done with the existing tools.
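
    A minimal sketch of one way to do this from the console, assuming a reasonably recent apt that ships apt-get changelog; the loop, file names and the held package below are illustrative, not from the question:

        # List the packages a plain upgrade would touch, without the simulated-install noise.
        apt-get -s upgrade | awk '/^Inst/ {print $2}' > /tmp/upgradable

        # Page through the changelog of each candidate before deciding.
        while read -r pkg; do
            apt-get changelog "$pkg" | less
        done < /tmp/upgradable

        # Hold back anything that looks risky, then upgrade the rest as usual.
        sudo apt-mark hold some-package
        sudo apt-get upgrade

    apt-listchanges is another commonly used tool for exactly this review step.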

    Read the article

  • Efficiently installing fully-patched Windows XP, IE, and Office 2007 on an isolated PC

    - by JPaget
    I have been tasked with installing Windows XP, IE, and Office 2007 on a computer that will become part of a standalone network not connected to the Internet. What is a good way to install all of the security updates? I'm installing from CDs of Windows XP SP2 and MS Office 2007. Next I plan to download Windows XP SP3 and Office 2007 SP2, burn them to CD, and install both service packs. Finally I plan to go to the Microsoft Download Center, download all applicable security updates, burn them to CD, and install them. I estimate that there are over 100 of these updates. Is there a more efficient way to do this?

    Read the article

  • Ways to deduplicate files

    - by User1
    I want to simply back up and archive the files on several machines. Unfortunately, among the files are some large ones that are actually the same file, just stored in different places on different machines. For instance, there may be a few hundred photos that were copied from one computer to the other as an ad-hoc backup. Now that I want to make a common repository of files, I don't want several copies of the same photo. If I copy all of these files to a single directory, is there a tool that can go through and recognize duplicate files and give me a list, or even delete one of the duplicates?
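
    For context, a minimal sketch of the checksum-based approach such tools use, assuming GNU coreutils; /srv/repository is just a placeholder path. Dedicated tools like fdupes or rdfind wrap the same idea with options to list, delete, or hard-link the duplicates.

        # Hash every file, sort by hash, and print groups of files that share a checksum.
        find /srv/repository -type f -exec md5sum {} + |
            sort |
            uniq -w32 --all-repeated=separate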

    Read the article

  • Windows 7 Extend C Volume to Unallocated Space

    - by user327777
    A while back I installed Ubuntu and later uninstalled it by, I think, deleting its partitions and restoring the Windows 7 boot loader. I am not that experienced with partitioning yet. As you can see in the screenshots below, there are two regions that are now unallocated. The 9 GB one is a recovery partition or something that came with the computer. How can I extend my C: partition to use both of them? I do not want that much storage just sitting there wasted. Currently, when I right-click on C: and hit Extend, the wizard pops up but there is no available space to extend into. http://i.imgur.com/VxEkdyR.png http://i.imgur.com/DdFZWX9.png Thanks everyone!

    Read the article

  • Spotlight on Oracle Social Relationship Management. Social Enable Your Enterprise with Oracle SRM.

    - by Pat Ma
    Facebook is now the most popular site on the Internet. People are tweeting more than they send email. Because there are so many people on social media, companies and brands want to be there too. They want to be able to listen to social chatter, engage with customers on social, create great-looking Facebook pages, and roll out social-collaborative work environments within their organization. This is where Oracle Social Relationship Management (SRM) comes in. Oracle SRM is a product that allows companies to manage their presence with prospects and customers on social channels. Let's talk about two popular use cases with Oracle SRM.

    Easy Publishing - Companies now have an average of 178 social media accounts, with every product, geography, or employee group creating its own social media channel. For example, if you work at an international hotel chain where every single hotel creates its own Facebook page for its location, that chain can have well over 1,000 social media accounts. Managing these channels is a mess: logging in and out of every account, making sure that all accounts are on brand, and preventing rogue posts from destroying the brand. This is where Oracle SRM comes in. With Oracle Social Relationship Management, you can log into one window and post messages to all 1,000+ social channels at once. You can set up approval flows so that each account generates its own content, but that content must be approved before publishing. The benefits are easy social media publishing, brand consistency across all channels, and protection of your brand from inappropriate posts.

    Monitoring and Listening - People are writing and talking about your company right now on social media. 75% of social media users have written a negative post about a brand after a poor customer service experience. Think about all the negative posts you see in your Facebook news feed about delayed flights or being on hold for 45 minutes. There is so much social chatter going on around your brand that it's almost impossible to keep up or comprehend what's going on. That's where Oracle SRM comes in. With Social Relationship Management, a company can monitor and listen to what people are saying about them on social channels. They can drill down into individual posts or get a high-level view of trends and mentions. The benefits are comprehending what's being said about your brand and its competitors, understanding customers and their intent, and responding to negative posts before they become a PR crisis.

    Oracle SRM is part of Oracle Cloud. The benefits of cloud deployment for customers are faster deployments, less maintenance, and lower cost of ownership versus on-premise deployments. Oracle SRM also fits into Oracle's vision to social enable your enterprise: with Oracle SRM, social media is not just a marketing channel, it is also a mechanism for sales, customer support, recruiting, and employee collaboration. For more information about how Oracle SRM can social enable your enterprise, please visit oracle.com/social. For more information about Oracle Cloud, please visit cloud.oracle.com.

    Read the article

  • Are VMWare ESXi 5 patches cumulative?

    - by ewwhite
    It seems basic, but there's confusion about the patching strategy needed to manually update standalone VMWare ESXi hosts. The VMWare vSphere blog attempts to explain this, but it's still not clear. From the blog: "Say Patch01 includes updates for the following VIBs: 'esxi-base', 'driver10' and 'driver 44'. And then later Patch02 comes out with updates to 'esxi-base', 'driver20' and 'driver 44'. Patch02 is cumulative in that the 'esxi-base' and 'driver 44' VIBs will include the updates in Patch01. However, it's important to note that Patch02 does not include the 'driver 10' VIB, as that module was not updated." Many of my ESXi installations are standalone and do not make use of Update Manager. It is possible to update an individual host using the patches made available through the VMWare patch download portal. The process is quite simple, and that part makes sense. The bigger issue is determining what to actually download and install. In my case, I have a good number of HP-specific ESXi builds that incorporate sensors and management for HP ProLiant hardware. Let's say those servers start at ESXi build #474610 from 9/2011. Looking at the patch portal screenshot below, there is a patch for ESXi update01, build #623860. There are also patches for builds #653509 and #702118. Coming from the old build of ESXi, what is the proper approach to bring the system fully up to date? Which patches are cumulative and which need to be applied sequentially? Perhaps the download size is the confusing factor, but is installing the newest build the right approach, or do I need to step back and patch incrementally?
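
    As a rough sketch (not part of the original question): on a standalone ESXi 5 host, a patch bundle downloaded from the portal can be applied from the ESXi shell. The datastore path and bundle name below are purely illustrative; "update" only replaces VIBs that are already installed, which matters for the HP-customized builds mentioned above.

        # Put the host in maintenance mode, apply the offline bundle, then reboot.
        vim-cmd hostsvc/maintenance_mode_enter
        esxcli software vib update -d /vmfs/volumes/datastore1/ESXi500-201207001.zip
        reboot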

    Read the article

  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/firewalls, using standard ports that would be open by default in all consumer NAT routers (and without the minions having a public DNS record or static IP)? I'm working my way through my first Salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master, but I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings:

        in /etc/salt/master:
            publish_port: 465
            ret_port: 443
        in /etc/salt/minion:
            master_port: 465

    That did not work. Background: I have a custom-developed application presently running on about 40 Kubuntu laptops (and more planned). Every few months I have to update the application (often this just amounts to replacing a .jar file, which requires root permissions). I also have to run Ubuntu updates and a few other minor things. I've been doing it manually, one by one, using TeamViewer to log into each client, and I would like to dramatically improve this process. The two options I'm aware of are:

        - Use reverse ssh tunnels and bash scripts. I tested this and it works, but I don't get any of the reporting, etc., that I would get with Salt Stack.
        - Use Salt Stack (or a similar management tool). But I need a really simple tool; I can't invest any time in a big learning curve.

    I looked at Puppet and a bunch of related tools, and the only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above. I appreciate suggestions.
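
    For reference, a hedged sketch of Salt's stock port layout (the values below are the defaults, not the 465/443 experiment above, and the master address is hypothetical): the minion opens outbound TCP connections to the master's publish and return ports, so in the default setup only the master needs reachable ports and minions behind consumer NAT normally work without any port forwarding.

        # /etc/salt/master (defaults shown explicitly)
        publish_port: 4505
        ret_port: 4506

        # /etc/salt/minion
        master: salt.example.org   # hypothetical master address
        master_port: 4506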

    Read the article

  • How can the little guys effectively learn and use puppet?

    - by drumfire
    Six months ago, in our not-for-profit project we decided to start migrating our system management to a Puppet controlled environment because we are expecting our number of servers to grow substantially between now and a year from now. Since the decision has been made our IT guys have become a bit too annoyed a bit too often. Their biggest objections are: "We're not programmers, we're sysadmins"; Modules are available online but many differ from one another; wheels are being reinvented too often, how do you decide which one fits the bill; Code in our repo is not transparent enough, to find how something works they have to recurse through manifests and modules they might have even written themselves a while ago; One new daemon requires writing a new module, conventions have to be similar to other modules, a difficult process; "Let's just run it and see how it works" Tons of hardly known 'extensions' in community modules: 'trocla', 'augeas', 'hiera'... how can our sysadmins keep track? I can see why a large organisation would dispatch their sysadmins to puppet courses to become puppet masters. But how would smaller players get to learn puppet to a professional level if they do not go to courses and basically learn it via their browser and editor?

    Read the article

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers, all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed... e.g. MySQL, firewalls, SAN storage, load balancers... There are others who are generalists and can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration management methodologies. For instance, we have two engineers who do the bulk of Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at Bash shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not/cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

    Read the article

  • managing a high traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with the music they have made available, and consumers will visit the page and listen to it. We are starting with just a handful of artists, but I think the project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists. How can I best prepare server space and bandwidth capacity? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn-rate estimates. I don't have a ton of capital to start with while putting together a business plan, but I am seeking investment. I have a game plan to grow fast enough to be successful and slowly enough to manage the financial growth requirements. Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, d.b.a. Badfish Productions, LLC

    Read the article

  • Is the sysadmin/netadmin the defacto project planner at your organization?

    - by gft74
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for deployment of developer applications. Typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the reqs for the site (SSL? db? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering what servers it will live on and why those servers, what the timeline is for creating the resources, and a step-by-step SOP for getting the application onto the server and all related resources created (DNS, firewall, load balancer, etc.). I may just be whining, but it feels like this is something better suited to our Project Management staff (which we have) or to the developer. I understand that I need to give them a timeline on creating the resources, but it still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article

  • Map FTP folder to folder on different FTP server

    - by jolt
    In my team we work a lot with FTP. We upload and download files from several different servers daily. Currently every member of the team manages the access credentials for each FTP server locally on their own machine. I am looking for a way to set up a central FTP server that we can connect to, and from there navigate to folders that each represent one of the other FTP servers that we connect to daily. Something like this:

        In-house central FTP server:
        |- FolderA --> server A root folder
        |- FolderB --> server B root folder
        |- FolderC --> server C root folder

    A setup like this would mean that we could manage access credentials on the central FTP server, and team members would only need the credentials for the central FTP server; from there they could navigate to the other servers through these "virtual" folders. We could potentially develop our own custom FTP server that just forwards requests to the remote FTP servers, but I feel like something like this (or something similar) must already have been done. So I'm looking for pointers that could help us find software for Windows that could simplify our current setup. Thank you! Similar (unanswered) question here: FTP management server

    Read the article

  • AD User Passwords expiring without any notifications?

    - by scooter133
    We set up password policies in Active Directory to expire people's passwords after so many days. Well, it looks like the time has come for those passwords to expire, and people are getting locked out... There has been no warning that user passwords were about to expire. They just come in to work and they cannot log in, the phones no longer connect, nothing. Reset the password and all is good. Some of the users are locked out, though most are not; they just cannot log in. When setting the password expiration I didn't see anything about warning the users of the impending expiration. It seems like it used to warn you 15 days or so before the password would expire. Clients range from WinXP, WinVista and Win7 to Server 2008 R2 Remote Desktop Services. How can I make sure my users are warned of the expiration? Resultant Set of Policy for a user that was not prompted (the winning GPO is the Default Domain Policy in every case):

        Account Policies/Password Policy
            Enforce password history: 10 passwords remembered
            Maximum password age: 270 days
            Minimum password age: 0 days
            Minimum password length: 4 characters
            Password must meet complexity requirements: Disabled
            Store passwords using reversible encryption: Disabled
        Account Policies/Account Lockout Policy
            Account lockout duration: 20 minutes
            Account lockout threshold: 5 invalid logon attempts
            Reset account lockout counter after: 15 minutes
        Local Policies/Audit Policy
            Audit account logon events: Failure
            Audit account management: Success, Failure
            Audit directory service access: Success, Failure
            Audit logon events: Failure
            Audit policy change: Success, Failure
            Audit privilege use: Failure
        Local Policies/Security Options - Interactive Logon
            Interactive logon: Prompt user to change password before expiration: 7 days

    Read the article

  • Using gentoo, how does one stick -9999 ebuild to a specific svn revision?

    - by hurikhan77
    As an example, given the django-9999 ebuild, to match the developers' environment I need to check out r12120 from trunk. Installing Django manually is not an option for package-management reasons, but there is also no ebuild in portage for the 1.2 beta versions. So I did the following: ESVN_OPTIONS="-r12120" emerge -1a django, which installed the required revision from svn. But this is cumbersome. Is there some way to define this statically per ebuild, e.g. something like DJANGO_SVN_REV="12120" in make.conf? That would be much cleaner in my eyes, because next time I need to rebuild django for whatever reason, I have to remember "Oh, I wanted this to stick to a specific revision" and the next question will be "err, f&!#$?%, what was it again?" What's the best way to go here? Keep in mind:

        - Manually installing packages without the package manager's knowledge is no option.
        - Working around it by manually prefixing emerge with variables is no option.
        - Setting up /etc/portage/package.env would be a way to go (as described here), but that seems pretty unsupported and kludgy to me and thus unpreferable.
        - Modifying make.conf would be a way to go.
        - Keeping the ebuild in an overlay would be an option.
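
    A hedged sketch of the /etc/portage/package.env route mentioned above (the env file name is illustrative): Portage reads per-package environment files from /etc/portage/env/, so the pinned revision can live in one place instead of being typed on every emerge.

        # /etc/portage/env/django-svn-rev.conf
        ESVN_OPTIONS="-r12120"

        # /etc/portage/package.env
        dev-python/django django-svn-rev.conf

        # After that, a normal rebuild picks up the pinned revision:
        emerge -1a dev-python/django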

    Read the article

  • is ksplice production ready?

    - by faultyserver
    I would be interested to hear the Server Fault community's experiences with Ksplice in production. A quick blurb from Wikipedia: "Ksplice is a free and open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system," and "Ksplice can, without restarting the kernel, apply any source code patch that only needs to modify the kernel code. Unlike other hot update systems, Ksplice takes as input only a unified diff and the original kernel source code, and it updates the running kernel correctly, with no further human assistance required. Additionally, taking advantage of Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch." So, a few questions: How has the stability been? Any odd issues that you have encountered with its 'rebootless live patching' of the kernel? Kernel panics or horror stories? I have been running it on a few test systems and so far it's been working as advertised, but I am interested in what other sysadmins' experiences have been with Ksplice before going 'all in' and deploying this on our production servers. So, anybody using Ksplice in production? Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going... "If you are aware of Ksplice, is there a reason you are not using it?" "Do you feel it's still too bleeding edge, unproven or untested?" "Does Ksplice not fit well within your current patch-management system?" "Do you hate having systems that have long (and secure) uptimes?" ;-)

    Read the article

  • Windows 7 hangs after going into sleep a second time

    - by Brian Stephenson
    I've searched all around Google and can't figure out why this is happening, so I decided to ask here to see if anyone has a problem like this. Like it says in the title, whenever I put the machine to sleep ONCE I'm able to wake it, but going back to sleep again AFTER waking up for the first time results in it hanging with no input and no output, the fan spinning as fast as possible and a lot of heat being spewed out as well. I've tried various things like setting all USB root hubs to not be switched off for power saving, disabling USB selective suspend, disabling PCI-e link state power management, and even unplugging ALL USB devices, and it won't wake up after the second attempt. I've even waited up to a full hour with the CPU fan spinning loudly and it's still stuck trying to wake up. The only USB devices I use are a Microsoft USB Comfort Curve Keyboard 2000 (IntelliType Pro) and a generic HID-compliant mouse from Creative, model number OMC90S "CREATIVE MOUSE OPTICAL LITE". My other devices like external drives and controllers are unplugged when I'm not using them, as having too many USB devices plugged in at a time causes a deadlock on almost all of the ports I have. Here are my system specifications (most of these are from CPU-Z):

        Brand: Gateway DX4300-19
        Mainboard: Gateway RS780
        Chipset: AMD 780G Rev 00
        Southbridge: AMD SB700 Rev 00
        LPCIO: ITE IT8718
        BIOS: American Megatrends Inc. ver P01-A4 09/15/2009
        CPU: AMD Phenom II X4 810 at 2.60 GHz
        RAM: 8.0 GB DDR2 Dual Channel Ganged Mode at 400 MHz
        GPU: ATI Radeon HD3200 Graphics Integrated - RS780
        OS: Windows 7 Home Premium x64 OEM (Acer Group)
        HDD: WDC WD10EADS-22M2B0 1.0 TB (Western Digital Green Caviar)

    My BIOS has absolutely no control over whether sleep mode is S1 or S3, so I can't check those settings or change them. Hybrid sleep is also disabled. I can successfully hibernate and wake from hibernation, but this is painfully slow due to a hard drive problem I'm having with this "Green" drive (hibernation takes around 3 minutes to complete). Any help would be appreciated, thanks.

    Read the article

  • Deployment and monitoring tools for java/tomcat/linux environment

    - by Ran
    I've been a developer for many years, but I don't have tons of experience in ops, so apologies if this is a newbie question. In my company we run a web service written in Java, mainly based on a Tomcat web server. We have two datacenters with about 10 hosts each. Hosts are of several types: database, Tomcats, some offline Java processes, memcached servers. All hosts run Linux (CentOS). Up until now, when releasing a new version to production we've been using a set of in-house shell scripts that copy jars/wars and restart the Tomcats. The company has gotten bigger, so it has become more and more difficult to operate all this and to take code from development through QA and staging to production. A typical release often involves human errors that cost us precious uptime. Sometimes we need to revert to the last known good version, and that isn't easy, to say the least... We're looking for a tool, a framework, a solution that provides the following:

        - Supports the given list of technologies (Java, Tomcat, Linux, etc.)
        - Provides easy deployment through the different stages, including QA and production
        - Provides configuration management, e.g. setting server properties (what's the connection URL of each host, etc.), server.xml or context configuration, etc.
        - Monitoring. If we can get monitoring in the same package, that'll be nice; if not, then yet another tool we can use to monitor our servers
        - Preferably open source, with tons of documentation ;)

    Can anyone share their experience? Suggest a few tools? Thanks!
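
    For illustration only, a sketch of the kind of in-house deploy step described above (host names, paths and the service name are made up); configuration-management and deployment tools essentially replace this loop with versioned, repeatable, per-stage runs.

        # Copy the new artifact to each application host and bounce Tomcat.
        for host in app01 app02; do
            scp target/myservice.war "$host":/opt/tomcat/webapps/myservice.war
            ssh "$host" 'sudo /etc/init.d/tomcat restart'
        done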

    Read the article

  • How can you exclude folders from appearing in the Recent Items feature of Windows 7 start menu?

    - by Jordan Weinstein
    To be clear, I like the Recent Items feature; I do not want to turn it off. I work at a law firm where we integrate Office with a document management system (DMS). If Recent Items is turned on, DMS-opened documents will show up in the Recent Items list of the Windows 7 start menu when hovering over Word (or Excel, PowerPoint, etc.). However, the integration doesn't handle this correctly, so if a user were to click on one of those entries, it wouldn't work right. In short, we've always needed to turn off Recent Items completely on a DMS-integrated workstation. Curious if anyone knows of a way to exclude a directory from being "captured", so to speak. When you open a DMS document, the file gets copied to a local directory where it is saved as you work, until you close it and it gets checked back in to the DMS. I'd like to be able to exclude that local directory from Recent Items, so local files in My Documents and the Desktop would show up in Recent Items, but not DMS-opened documents. Hope this makes sense.

    Read the article

  • What are the right questions to ask when deciding whether to use Chef or Puppet?

    - by John Feminella
    I am about to start a new project which will, in part, require deploying many identical nodes of approximately three different classes:

        - Data nodes, which will run sharded instances of MongoDB.
        - Application nodes, which will run instances of a Ruby on Rails application and an older ASP.NET MVC application.
        - Processing nodes, which will run jobs requested by the application nodes.

    All the nodes will run on instances of Ubuntu 10.04, though they will have different packages installed. I have some familiarity with Chef from previous projects, though I don't consider myself an expert. In an effort to do due diligence, I have been investigating alternative possibilities. We have a number of folks in-house who are long-time Puppet users, and they have encouraged me to take a look. I am having trouble evaluating both choices, though. Chef and Puppet share much of the same domain terminology -- packages, resources, attributes, and so on -- and they have a common history that stems from taking different approaches to the same problem. So in some sense they are very similar. But much of the comparison information I've found, like this article, is a little outdated. If you were starting this project today, what questions would you ask yourself to decide whether you should use Chef or Puppet for configuration management? (Note: I don't want an answer to the question "Should I use Chef or Puppet?")

    Read the article

  • Objective measures of the power of programming languages

    - by Casebash
    Are there any objective measures of the power of programming languages? Turing-completeness is one, but it is not particularly discriminating. I also remember there being a few other measures of power that describe more limited models (like finite-state automata), but is there any objective measure that is more powerful?

    Read the article

  • Software/IT security training and certificate

    - by 5YrsLaterDBA
    I am thinking about attending software security training and getting a software security certificate (or IT security in general). I am in the Boston, MA area. I am new to the software security field and need to learn it for a current project and/or future jobs. Any suggestions about training and certificates? Thanks. EDIT: How about this course and certificate? http://scpd.stanford.edu/public/category/courseCategoryCertificateProfile.do?method=load&from=courseprofile&certificateId=3575647#searchResults

    Read the article

  • Runtime unhandled exception while executing facedetect.py in opencv

    - by Rupesh Chavan
    When I tried to execute the facedetect.py Python script from the OpenCV samples, I got the following runtime exception. Can someone please give me a pointer or a clue about the exception and why it occurs? Here is the trace:

        'python.exe': Loaded 'C:\Python26\python.exe'
        'python.exe': Loaded 'C:\WINDOWS\system32\ntdll.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\kernel32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\python26.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\user32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\gdi32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\advapi32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\rpcrt4.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\shell32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\msvcrt.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\shlwapi.dll'
        'python.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_6f74963e\msvcr90.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\imm32.dll'
        'python.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.Windows.Common-Controls_6595b64144ccf1df_6.0.2600.2982_x-ww_ac3f9c03\comctl32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\comctl32.dll'
        'python.exe': Loaded 'C:\Python26\Lib\site-packages\opencv_cv.pyd', Binary was not built with debug information.
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libcv200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libcxcore200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\Python26\Lib\site-packages\opencv_ml.pyd', Binary was not built with debug information.
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libml200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\Python26\Lib\site-packages\opencv_highgui.pyd', Binary was not built with debug information.
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libhighgui200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\WINDOWS\system32\ole32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\oleaut32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\avicap32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\winmm.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\version.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\msvfw32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\avifil32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\msacm32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\msctf.dll'
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libopencv_ffmpeg200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\WINDOWS\system32\wsock32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\ws2_32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\ws2help.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\MSCTFIME.IME'
        Unhandled exception at 0x00e7e4e4 in python.exe: 0xC0000005: Access violation reading location 0xffffffff.

    Thanks a lot in advance, Rupesh Chavan

    Read the article
