Search Results

Search found 9380 results on 376 pages for 'report definition'.


  • Start Tomcat webapp with root privileges

    - by Hagay Myr
    I built a webapp that uses libpcap (via jpcap). In order to get the list of network interfaces or to bind to one, the application (in this case a webapp running on a Tomcat server) must run with root privileges. During development I simply ran Eclipse with root privileges (sudo eclipse) and my webapp worked just fine with Eclipse's local Tomcat server. However, when I try to deploy the webapp to the "real" Tomcat server, it isn't working. I also tried starting the tomcat6 service with sudo and changing the TOMCAT6_USER definition (defined in /etc/init.d/tomcat6) from "tomcat6" to "root", but it made no difference. What should I do to make it work?
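
    A possible alternative to running all of Tomcat as root, sketched under the assumption of a Linux kernel with capability support and the libcap tools installed: grant only the raw-capture capabilities to the JVM binary Tomcat uses (the JVM path below is illustrative). Note that some JVMs then refuse to start unless their library path is configured system-wide, because capability-enabled binaries ignore LD_LIBRARY_PATH.

      # grant packet-capture capabilities to the JVM instead of running it as root
      sudo setcap cap_net_raw,cap_net_admin+eip /usr/lib/jvm/java-6-openjdk/jre/bin/java
      # verify the capabilities took effect
      getcap /usr/lib/jvm/java-6-openjdk/jre/bin/java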

    Read the article

  • AWS EC2: How to determine whether my EC2/scalr AMI was hacked? What to do to secure it?

    - by Niro
    I received a notification from Amazon that my instance tried to hack another server. There was no additional information besides this log dump:

    Original report:
    Destination IPs:
    Destination Ports:
    Destination URLs:
    Abuse Time: Sun May 16 10:13:00 UTC 2010
    NTP: N
    Log Extract:
    External 184.xxx.yyy.zzz, 11.842.000 packets/300s (39.473 packets/s), 5 flows/300s (0 flows/s), 0,320 GByte/300s (8 MBit/s)

    (184.xxx.yyy.zzz is my instance's IP.) How can I tell whether someone has penetrated my instance? What steps should I take to make sure my instance is clean and safe to use? Is there some intrusion detection technique or log that I can use? Any information is highly appreciated.
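
    A starting checklist (hedged, not a guarantee of anything): inspect logins, sockets, processes, and scheduled jobs for anything unfamiliar. If compromise is confirmed, the only safe path is to rebuild the instance from a trusted AMI and rotate all keys.

      last -a                        # recent logins and their source hosts
      netstat -tupan                 # sockets and the processes that own them
      ps auxf                        # process tree; look for unfamiliar daemons
      crontab -l; ls /etc/cron*      # scheduled jobs that could respawn malware
      find / -mtime -2 -type f -perm /u+x 2>/dev/null | head -50   # recently changed executables
      # rootkit scanners, installed from a trusted source:
      #   chkrootkit
      #   rkhunter --check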

    Read the article

  • Massive crawling requests from Google App Engine user agent

    - by SilentPlayer
    Hi friends, I'm badly affected by the 'AppEngine-Google' user agent, receiving 5-6 requests per second on my HTTP server. This bot is crawling my site just like GoogleBot does. The following is a sample line from my access logs:

    72.14.192.3 - - [19/May/2010:01:27:06 +0000] "GET /some-url/etc-123.htm HTTP/1.1" 200 4707 "-" "AppEngine-Google; (+http://code.google.com/appengine; appid: harpy000)"

    I have checked the IP address; it is registered to Google Inc. Can anyone tell me where I can report this abuse to Google Inc., or share any information about this issue? Thank you!
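
    If the goal is simply to refuse these requests, below is a hedged Apache 2.2 sketch (assuming the site runs Apache with mod_setenvif enabled); the appid field in the user agent (harpy000) identifies the offending application if you report it to Google.

      # tag requests from the App Engine fetcher, then deny them
      BrowserMatchNoCase "AppEngine-Google" appengine_bot
      <Location />
          Order Allow,Deny
          Allow from all
          Deny from env=appengine_bot
      </Location>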

    Read the article

  • Ubuntu 10.04: unable to log in after fresh install

    - by Richard
    Hello all, I've just installed a fresh copy of Ubuntu 10.04, downloaded a couple of days ago. The installation seemed to go fine. However, I can't log in: the login screen just seems to reset and asks me for my password again. It's not an authentication / incorrect password issue: if I put in a wrong password, I get "Authentication failure". I've googled around, and others report the same issue on the Ubuntu forums, but there doesn't seem to be a fix. Does anyone know of a workaround, or what the problem is? I have 9.10; I might end up just installing that instead. Thanks
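
    One common cause of exactly this loop is a root-owned .Xauthority or .ICEauthority left behind in the home directory (for example by an earlier sudo session). A hedged check from the Ctrl-Alt-F1 console; the username is illustrative:

      ls -l ~/.Xauthority ~/.ICEauthority           # should be owned by you, not root
      sudo chown richard:richard ~/.Xauthority ~/.ICEauthority
      df -h /home                                   # a full disk produces the same symptom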

    Read the article

  • Using Bazaar (BZR) on an AFP- or SMB-mounted server not working

    - by Dan Berlyoung
    Has anyone been able to get BZR working on a mounted AFP or SMB share? I've tried both (the AFP volume is actually coming off an Xserve) and neither works. I have BZR 2.0.0 and am running it on a Mac with OS X 10.5. I keep getting an error like this:

    bzr: ERROR: Could not acquire lock "/Volumes/joeserver/Documents/bzr/remote_test/.bzr/checkout/dirstate": [Errno 45] Operation not supported

    I googled around a bit but only found a fairly stale (2007) bug report on launchpad.net (Bug #313625, to be specific). Any ideas?
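
    Errno 45 suggests the share simply doesn't support the OS-level locking that bzr's dirstate needs. A hedged workaround is to keep the working tree on a local disk and treat the share only as the branch store (paths and the ssh URL are illustrative):

      # work locally, pushing/pulling to the branch on the share
      bzr branch /Volumes/joeserver/Documents/bzr/remote_test ~/work/remote_test
      # or skip the mount entirely and let a server-side bzr do the locking
      bzr branch bzr+ssh://user@joeserver/Documents/bzr/remote_test ~/work/remote_test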

    Read the article

  • How to configure the autofs5 timeout on a per-filesystem basis?

    - by Norman Ramsey
    Because of a show-stopping bug in Debian autofs 4, I just upgraded to autofs5. It is not honoring the timeout option in my auto.master file:

    /var/autofs/removable /etc/auto.removable --timeout=2

    I use this map for thumb drives and so on; I don't want a general default timeout of 2 seconds. I did some digging, and although the --timeout option worked in autofs 4 and appears in some examples on the Web, it is not actually sanctioned (or even mentioned) in the documentation for the auto.master file. So I don't feel I can report the problem as a bug. How can I get autofs5 to time out after 2 seconds only on designated filesystems? Update: I am using a Debian-packaged autofs5, version 5.0.4-3.2.
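
    For reference, a hedged auto.master sketch: later autofs 5 releases do document -t/--timeout as a per-entry option in auto.master, so the spelling below is the documented one; whether the Debian 5.0.4 package honors it is exactly the question above. The /home line is an illustrative second entry, not from the original post.

      # /etc/auto.master -- per-mount-point timeouts; the global default
      # stays in /etc/default/autofs (TIMEOUT=...)
      /var/autofs/removable /etc/auto.removable --timeout=2
      /home                 /etc/auto.home      --timeout=600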

    Read the article

  • Ubuntu 9.10: after switching via KVM, mouse is detected but not usable

    - by CarlF
    I use a KVM switch to jump between my tower and laptop, both on my desk. With Ubuntu 9.04 this worked perfectly. In 9.10, when I switch to the tower and then back to the laptop, the mouse is detected (as shown by /var/log/messages) but moving it has no effect. If I use Ctrl-Alt-F1 to switch to a TTY, then Alt-F7 back to Xorg, the mouse starts working. The tower is running Windows 7, but that shouldn't matter. Sometimes, but not always, the USB keyboard on the KVM switch is also unusable and I have to use the laptop's built-in keyboard to switch to the TTY. The laptop has two monitors, the built-in panel and an external one; it's (obviously) the external panel that is attached via the KVM switch. Any suggestions? Should I report this to Canonical as a bug?
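
    A hedged workaround to try before the VT-switch dance: toggle the device off and on at the X input layer with xinput. The device name below is illustrative; take the real one from the list.

      xinput list                                              # find the mouse's name or id
      xinput set-prop "USB Optical Mouse" "Device Enabled" 0
      xinput set-prop "USB Optical Mouse" "Device Enabled" 1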

    Read the article

  • Cannot move Google Chrome tabs to a different monitor in Ubuntu

    - by Moses
    When I use Chrome in Windows, I can grab a tab, drag it over to my second monitor, and it will automatically maximize there. In Ubuntu, when I try to do the same, the tab snaps back to my primary monitor and maximizes there. There is a related bug report for Chromium that is labeled "fixed" here, but when I try Chromium instead of Chrome, I get the same behavior (not what I would call fixed). So far the only other things I've tried are changing themes, resetting Chrome's settings to default, and testing Chromium as noted above. I've also tried running Chrome from a live Linux disc, and the behavior is still the same. The issue does not occur with Firefox, but I need Chrome for work-related reasons, so simply switching isn't really an option (though it is a viable workaround). I'm using Chrome 30 and Ubuntu 13.04. Does anyone have any solutions?

    Read the article

  • What tools can be used to monitor a web application? Beyond "doesn't 404"

    - by Freiheit
    I have an internal web application that has recently gone through a major version upgrade. I would like to monitor this application over the weekend and look for 'soft' errors. I will still need to spot-check things by hand, but there are some common failure patterns that I think I can automate. Examples include data with bad formatting, blank rows in tables (indicating missing non-critical data), patterns in identifiers ("TEST" means one of my devs left a testing feed on), etc. I think there are applications out there that can be scripted to do things like the following (a skeleton of such a script appears below):
    1. Log in
    2. Go to $URL
    3. Select the 3rd link in $LIST or $PATTERN
    4. Check the HTML from that link for $PATTERNS
    5. Email a report
    Are these goals sane? What applications/tools can help with this?
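
    A minimal shell skeleton of steps 1, 2, 4 and 5, assuming curl, grep, and a working mail(1); the URL, credentials, and patterns are illustrative placeholders, and picking the Nth link (step 3) would need a small HTML parser on top.

      #!/bin/sh
      URL="http://intranet.example.com/report"
      COOKIES=/tmp/session.cookies

      # 1. log in and keep the session cookie
      curl -s -c "$COOKIES" -d "user=monitor&pass=secret" "$URL/login" > /dev/null

      # 2 and 4. fetch the page and scan it for soft-failure patterns
      curl -s -b "$COOKIES" "$URL/summary" \
          | grep -E 'TEST|<td>[[:space:]]*</td>' > /tmp/soft-errors.txt

      # 5. mail a report only if something matched
      if [ -s /tmp/soft-errors.txt ]; then
          mail -s "Soft-error report for $URL" ops@example.com < /tmp/soft-errors.txt
      fi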

    Read the article

  • MSA20 RAID5 recovery failure due to URE on another disk

    - by Andrey
    I have an MSA20 with one disk array across 12 disks and 3 LUNs on it (each RAID 5). A few days ago one disk in one of the LUNs failed and I replaced it. But the RAID 5 rebuild failed at 13%, and I see in the ADU report that one of the disks has "Errors Logged = 5566", which according to the SCSI specification is a URE (Sense Code=0x11, Qualifier=0x00). In the serial log I also see the URE error. It seems the RAID 5 can't be rebuilt because of this. So I have a few questions: Is there still a way to recover the RAID 5? If I keep the new replacement disk and remove the disk with the URE, will the other LUNs be destroyed, or just the failed LUN? If all the LUNs can fail, what is the point of giving each LUN its own RAID within one disk group array, if 2 failed disks can destroy everything? As I understand it, the preferred way in the future is one disk array per LUN, not one array with several LUNs? Thanks.

    Read the article

  • How to stop an Ethernet interface in a bridge configuration from obtaining an IP address via DHCP

    - by user71061
    Hi! I'm trying to configure OpenVPN in a bridged configuration. The first step requires creating a bridge interface (br0) that ties together the physical Ethernet interface (eth0) and the logical tap0 interface. This can be done with a simple script, but I want to use a less popular approach: configuring the bridge interface entirely via the /etc/network/interfaces file (on Debian Linux). So I have removed all eth0 definitions from /etc/network/interfaces and replaced them with the following br0 definition:

      auto br0
      iface br0 inet static
          pre-up openvpn --mktun --dev tap0
          address 10.0.0.1
          netmask 255.255.255.0
          bridge_ports eth0 tap0
          post-down openvpn --rmtun --dev tap0

    This works as I expected, but there is one problem: interface eth0 is part of bridge interface br0 AND it also receives its own IP address from my DHCP server (located on the same LAN where eth0 is connected). My question is: how do I stop the eth0 interface from obtaining its own IP address? (It should only be part of the br0 bridge.)
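
    The usual fix on Debian is to declare eth0 explicitly as manual, so ifupdown brings the link up without starting a DHCP client on it; br0 then owns the only address. A hedged sketch of the missing stanza (it is also worth checking that NetworkManager isn't managing eth0 behind ifupdown's back):

      auto eth0
      iface eth0 inet manual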

    Read the article

  • What is the best way to run ClamAV on Windows Server 2008 R2

    - by gabbsmo
    I'm hosting a WordPress site on Windows Server 2008 R2 and want to scan all files that users upload for viruses, using this plugin: http://wordpress.org/extend/plugins/upload-scanner/. I'm on a really tight budget (non-profit), so ClamAV seems like a good choice. What is the best way to run ClamAV under these circumstances? I'm considering the following options: (1) running the raw Windows build from http://sourceforge.net/projects/clamav/ and setting up definition updates with Task Scheduler (a sketch appears below) - is there any way to automate updates of the scanner binaries too? (2) Using a "distro" like ClamWin or Immunet (advertised on clamav.net). Any suggestions are welcome.
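
    A hedged sketch of the Task Scheduler route from the command line, assuming the ClamAV Windows build is unpacked under C:\clamav (all paths and times are illustrative):

      :: refresh virus definitions nightly at 02:00
      schtasks /create /tn "ClamAV definitions" /tr "C:\clamav\freshclam.exe" /sc daily /st 02:00
      :: scan the WordPress upload directory nightly at 03:00
      schtasks /create /tn "ClamAV upload scan" /tr "C:\clamav\clamscan.exe -r -i --log=C:\clamav\scan.log C:\inetpub\wwwroot\wp-content\uploads" /sc daily /st 03:00

    The raw build has no self-updater for the binaries themselves (freshclam updates definitions only), which is one argument for a packaged distribution like ClamWin.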

    Read the article

  • Environment variable issue in shell

    - by George2
    I am using Red Hat Enterprise Linux 5. I know the theory: setting an environment variable with export makes it apply to the current environment and its children, while setting it without export makes it apply only to the current environment. My confusion is: what is the exact definition of "child environment" and "current environment"? For example:

      $ var1=123
      $ echo "Hello [$var1]"

    The value of var1 (123) is printed in the shell, but I think echo is a command invoked by the current shell, so it (the echo command) should run in a child environment of the current shell, and the value of var1 should not be visible to echo (because I did not use export var1=123). Any comments? Thanks in advance!
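
    A minimal demonstration of the distinction, runnable in bash (the single quotes matter: they stop the current shell from expanding $var1 before the child starts):

      var1=123
      echo "Hello [$var1]"               # prints Hello [123] -- $var1 is expanded
                                         # by the current shell before echo runs
      bash -c 'echo "Hello [$var1]"'     # prints Hello [] -- the child shell did
                                         # not inherit var1 (no export)
      export var1
      bash -c 'echo "Hello [$var1]"'     # prints Hello [123] -- exported now

    So the echo in the question never looks up var1 at all: the current shell substitutes the value into echo's argument first (and echo is usually a shell builtin anyway, not a child process).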

    Read the article

  • HP DL380 G5 Predictive Drive Failure on a new drive

    - by CharlieJ
    Consolidated Error Report:
    Controller: Smart Array P400 in slot 3
    Device: Physical Drive 1I:1:1
    Message: Predictive failure.

    We have an HP DL380 G5 server with two 72 GB 15k SAS drives configured in RAID 1. A couple of weeks ago, the server reported a drive failure on drive 1. We replaced the drive with a brand new HDD with the same spare part number. A few days ago, the server started reporting a predictive drive failure on the new drive, in the same bay. Is it likely that the new drive is bad, or is it more likely that we have a bay failure problem? This is a production server, so any advice would be appreciated. I have another spare drive, so I can hot-swap it if this is a fluke and the new drive is just bad. Thanks! CharlieJ

    Read the article

  • BSOD: help interpreting this crash? [DMP file]

    - by feed_me_code
    My system just crashed; the Event Viewer details are below. It had been running great for the last few days, and this is the first crash since then. A link to the DMP file is at the bottom. Can someone please help interpret this crash? Thanks.

    The computer has rebooted from a bugcheck. The bugcheck was: 0x000000d1 (0x00001000000102fd, 0x0000000000000002, 0x0000000000000000, 0xfffff88010143afd). A dump was saved in: C:\Windows\Minidump\052714-23010-01.dmp. Report Id: 052714-23010-01.

    https://www.dropbox.com/s/ag19ejkrddnjjct/052714-23010-01.dmp
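
    A hedged sketch of how to read the minidump yourself, assuming the Debugging Tools for Windows are installed (the symbol cache path is illustrative). Bugcheck 0xD1 is DRIVER_IRQL_NOT_LESS_OR_EQUAL, which almost always points at a device driver.

      windbg -y "srv*C:\symbols*http://msdl.microsoft.com/download/symbols" -z C:\Windows\Minidump\052714-23010-01.dmp
      :: then, inside the debugger:
      ::   !analyze -v
      :: the output names the faulting module on the stack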

    Read the article

  • Snow Leopard connecting to Ubuntu 10.04 through Samba fails -- need help fixing

    - by Chris Altman
    I have an Ubuntu 10.04 web server. I want to connect to it with my OS X 10.6 machine and Finder. I have installed OpenSSH and Samba on the Ubuntu machine. In my smb.conf I have a share definition:

      [www]
          comment = Development Computer WWW
          path = /var/www
          writeable = yes
          browseable = yes
          allow hosts = 192.168.1.

    I can connect to the machine through Finder using a non-root user. When I attempt to add files through Finder I get an "Insufficient Permissions" error. Please help. I am not sure whether the issue is in the Samba configuration or in OS X 10.6. Thank you
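
    "Insufficient permissions" here is usually the Unix permissions on /var/www rather than Samba itself: the share is writeable, but the connecting Unix user can't write to the directory. A hedged smb.conf sketch; force user = www-data is an assumption about who owns /var/www on this box.

      [www]
          comment = Development Computer WWW
          path = /var/www
          writeable = yes
          browseable = yes
          allow hosts = 192.168.1.
          force user = www-data
          create mask = 0664
          directory mask = 0775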

    Read the article

  • Need to upgrade DDR2 RAM on HP Desktop

    - by jds
    I have this HP Pavilion desktop. As you can see, that page says the supported memory speed is PC2-4200. It currently has a 512 MB stick - CPU-Z screenshots: hxxp://i41.tinypic.com/j5clj6.jpg and hxxp://i39.tinypic.com/20tldlc.jpg However, a crucial.com scan gives a slightly different report - hxxp://crucial.com/systemscanner/viewscanbyid.aspx?id=5718CFE831D926C3 It says the system can support PC2-5300 memory. So my question is: which one should I trust? I want to upgrade the computer's RAM to 2 GB (the maximum supported), because XP Media Center is giving me problems and I will install Windows 7 on it. PC2-6400 is the most common DDR2 memory I have been able to find in the market here. Will it cause any problems if I install 2 × 1 GB PC2-6400 DDR2 memory sticks (in dual channel) in this computer (AFAIK, they will just run at the lower speed of 533 MHz, or whatever the motherboard supports), or do I absolutely need to get PC2-4200 sticks?

    Read the article

  • Routing traffic to specific web sites through Ethernet, the rest via Wi-Fi, on Mac OS X 10.6?

    - by user32448
    Hi. I have two separate Internet connections on a Mac, and I'd like one of them (via Ethernet, eth0, gateway 192.168.2.1) to serve only for backing up to a remote online storage service, and the other (via AirPort, en1, gateway 192.168.1.1) for all other Internet traffic. I tried using "route" from the terminal as follows:

      sudo route add -host 98.207.226.113 -interface eth0

    (just for testing against the site www.whatismyip.org, whose IP is 98.207.226.113, to see through which gateway the traffic is routed). I can see using netstat that the route is added:

      $ netstat -rn -f inet
      Routing tables

      Internet:
      Destination        Gateway        Flags    Refs    Use    Netif  Expire
      default            192.168.1.1    UGSc     49      0      en1
      98.207.226.113     192.168.2.1    UGSc     0       0      eth0

    However, the traffic in this case does NOT get routed properly through Ethernet, as if the routing definition I made were ignored. Any ideas? Thanks!
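
    Some hedged checks with tools already on OS X: confirm which gateway the kernel actually picks for that address, and whether the test site still resolves to the IP you pinned (a host route silently misses if the name now resolves elsewhere).

      route get 98.207.226.113       # shows the chosen gateway and interface
      nslookup www.whatismyip.org    # the site may resolve to a different IP now
      netstat -rn -f inet            # re-check the table after the test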

    Read the article

  • Outlook new message size nearly 1 MB

    - by Yossi Dahan
    I've been using Outlook 2010 for several weeks with no issues. Suddenly, a few days ago, the size of my outgoing messages got huge. Looking at this, it appears that a huge CSS style block is being created, with around 14,000 definitions for list items, making the message almost 1 MB before I even type one word. Emails before that point were very small. Needless to say, I can't remember changing anything, nor can anyone around here provide any possible explanation... Any ideas?

    Read the article

  • Send mail from a distribution group's email address

    - by Campo
    A user has Send permission on a distribution group in a Windows Server 2003 domain. I am the admin. When either of us sends email using the distribution group's email address, we get a non-delivery report:

    Your message did not reach some or all of the intended recipients.
    Subject: TEST
    Sent: 4/19/2010 4:46 PM
    The following recipient(s) cannot be reached:
    [email protected] on 4/19/2010 4:46 PM
    You do not have permission to send to this recipient. For assistance, contact your system administrator.
    MSEXCH:MSExchangeIS:/DC=local/DC=DOMAIN:SERVERNAME

    Thanks, JC
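
    The NDR says the sender lacks the Send As right on the group, which is separate from membership or Send To permission. On Exchange 2003 this is granted on the group's Security tab in ADUC; on Exchange 2007 or later the equivalent shell command is the hedged sketch below (identities are illustrative).

      Add-ADPermission -Identity "Distro Group" -User "DOMAIN\jc" -AccessRights ExtendedRight -ExtendedRights "Send As"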

    Read the article

  • Steps to make sure network is not blacklisted... again

    - by msindle
    I have an interesting issue. A client of mine just got blacklisted due to spam being sent out over the last 2 days. My firewall is configured to allow outbound mail on port 25 only from our mail server (Exchange 2010), and I have verified that there are no open relays in our transport rules. We are running Vipre Business, and after running deep scans with updated definitions all computers come back clean. I ran a message tracking report on our Exchange server that shows all mail sent via the server over the last couple of weeks and didn't see anything malicious or out of the ordinary. I have also verified that there are no home devices or rogue computers on the network. For all practical purposes the network appears to be clean, but we still wound up on 5 or 6 blacklists. Where should I start looking next? Is there a "best practices" guide that can help eradicate this issue? Thanks in advance! msindle
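
    A quick hedged way to see which lists you are on, straight from a shell: reverse the octets of your public IP and query each blacklist (the IP below is illustrative). An answer in 127.0.0.x means listed; no answer means clean.

      IP=203.0.113.5
      REV=$(echo $IP | awk -F. '{print $4"."$3"."$2"."$1}')
      for bl in zen.spamhaus.org bl.spamcop.net b.barracudacentral.org; do
          echo -n "$bl: "; dig +short $REV.$bl
      done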

    Read the article

  • C# - Cannot implicitly convert type List<Product> to List<IProduct>

    - by Keith Barrows
    I have a project with all my interface definitions: RivWorks.Interfaces. I have a project where I define concrete implementations: RivWorks.DTO. I've done this hundreds of times before, but for some reason I am getting this error now:

    Cannot implicitly convert type 'System.Collections.Generic.List<RivWorks.DTO.Product>' to 'System.Collections.Generic.List<RivWorks.Interfaces.DataContracts.IProduct>'

    Interface definition (shortened):

      namespace RivWorks.Interfaces.DataContracts
      {
          public interface IProduct
          {
              [XmlElement]
              [DataMember(Name = "ID", Order = 0)]
              Guid ProductID { get; set; }

              [XmlElement]
              [DataMember(Name = "altID", Order = 1)]
              long alternateProductID { get; set; }

              [XmlElement]
              [DataMember(Name = "CompanyId", Order = 2)]
              Guid CompanyId { get; set; }
              ...
          }
      }

    Concrete class definition (shortened):

      namespace RivWorks.DTO
      {
          [DataContract(Name = "Product", Namespace = "http://rivworks.com/DataContracts/2009/01/15")]
          public class Product : IProduct
          {
              #region Constructors
              public Product() { }

              public Product(Guid ProductID)
              {
                  Initialize(ProductID);
              }

              public Product(string SKU, Guid CompanyID)
              {
                  using (RivEntities _dbRiv = new RivWorksStore(stores.RivConnString).NegotiationEntities())
                  {
                      model.Product rivProduct = _dbRiv.Product
                          .Where(a => a.SKU == SKU && a.Company.CompanyId == CompanyID)
                          .FirstOrDefault();
                      if (rivProduct != null)
                          Initialize(rivProduct.ProductId);
                  }
              }
              #endregion

              #region Private Methods
              private void Initialize(Guid ProductID)
              {
                  using (RivEntities _dbRiv = new RivWorksStore(stores.RivConnString).NegotiationEntities())
                  {
                      var localProduct = _dbRiv.Product.Include("Company")
                          .Where(a => a.ProductId == ProductID).FirstOrDefault();
                      if (localProduct != null)
                      {
                          var companyDetails = _dbRiv.vwCompanyDetails
                              .Where(a => a.CompanyId == localProduct.Company.CompanyId).FirstOrDefault();
                          if (companyDetails != null)
                          {
                              if (localProduct.alternateProductID != null && localProduct.alternateProductID > 0)
                              {
                                  using (FeedsEntities _dbFeed = new FeedStoreReadOnly(stores.FeedConnString).ReadOnlyEntities())
                                  {
                                      var feedProduct = _dbFeed.AutoWithImage
                                          .Where(a => a.ClientID == companyDetails.ClientID && a.AutoID == localProduct.alternateProductID)
                                          .FirstOrDefault();
                                      if (companyDetails.useZeroGspPath.Value || feedProduct.GuaranteedSalePrice > 0)   // kab: 2010.04.07 - new rules...
                                          PopulateProduct(feedProduct, localProduct, companyDetails);
                                  }
                              }
                              else
                              {
                                  if (companyDetails.useZeroGspPath.Value || localProduct.LowestPrice > 0)   // kab: 2010.04.07 - new rules...
                                      PopulateProduct(localProduct, companyDetails);
                              }
                          }
                      }
                  }
              }

              private void PopulateProduct(RivWorks.Model.Entities.Product product, RivWorks.Model.Entities.vwCompanyDetails RivCompany)
              {
                  this.ProductID = product.ProductId;
                  if (product.alternateProductID != null)
                      this.alternateProductID = product.alternateProductID.Value;
                  this.BackgroundColor = product.BackgroundColor;
                  ...
              }

              private void PopulateProduct(RivWorks.Model.Entities.AutoWithImage feedProduct, RivWorks.Model.Entities.Product rivProduct, RivWorks.Model.Entities.vwCompanyDetails RivCompany)
              {
                  this.alternateProductID = feedProduct.AutoID;
                  this.BackgroundColor = Helpers.Product.GetCorrectValue(RivCompany.defaultBackgroundColor, rivProduct.BackgroundColor);
                  ...
              }
              #endregion

              #region IProduct Members
              public Guid ProductID { get; set; }
              public long alternateProductID { get; set; }
              public Guid CompanyId { get; set; }
              ...
              #endregion
          }
      }

    In another class I have:

      using dto = RivWorks.DTO;
      using contracts = RivWorks.Interfaces.DataContracts;
      ...
      public static List<contracts.IProduct> Get(Guid companyID)
      {
          List<contracts.IProduct> myList = new List<dto.Product>();
          ...

    Any ideas why this might be happening? (And I am sure it is something trivially simple!)
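
    For context, a hedged sketch (not RivWorks code): List<T> is invariant in T, so a List<Product> is not a List<IProduct> even though Product implements IProduct. Either build the list as the interface type from the start, or convert an existing concrete list; GetProducts below is a hypothetical helper.

      // declare the list as the interface type; adding concrete items is fine
      List<contracts.IProduct> myList = new List<contracts.IProduct>();
      myList.Add(new dto.Product());

      // or convert an existing concrete list (requires System.Linq)
      List<dto.Product> products = GetProducts(companyID);   // hypothetical helper
      List<contracts.IProduct> converted = products.Cast<contracts.IProduct>().ToList();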

    Read the article

  • Text comparison utility

    - by Aaron
    I know this has been asked before... but I have a twist, as I have been trying out various free software offerings. I want to rid our department of DiffDoc; the problem is that I am having trouble locating something that will do what we need. WinMerge has been the latest attempt... The problem is simple: one Word doc, one PDF with a portion of it containing the text to be compared against. Compare them and be done. Raw text, ignore whitespace, ignore carriage returns, etc. Just compare the text and give me the results in some sort of report. NOTE: I have tried ExamDiff, kdiff3, Tortoise, and a few others...
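
    A hedged sketch of the "raw text" comparison from a shell, assuming poppler's pdftotext and antiword are installed (file names are illustrative). Normalizing to one word per line makes diff blind to line wrapping, carriage returns, and whitespace.

      pdftotext report.pdf - | tr -s '[:space:]' '\n' > pdf.txt
      antiword report.doc    | tr -s '[:space:]' '\n' > doc.txt
      diff -u pdf.txt doc.txt > comparison-report.txt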

    Read the article

  • What is a good tool to scan an FTP directory and show disk usage visually a la KDirStat/WinDirStat?

    - by Wesley 'Nonapeptide'
    Is there a tool that can scan an FTP directory and build a visual representation of disk usage? I'm running on Windows, so that platform is my preference for this tool, but a *NIX tool would also be useful. I'm thinking along the lines of WinDirStat, KDirStat and TreeSize. At first I thought WinDirStat might be able to scan an FTP directory, but it was not so. FOSS is a plus but not a requirement (I'm not against paying for good software). I'd also like a simple report on how many files of which types are present, the largest files, etc., much like the simple file-type reporting in *DirStat.
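
    On the *NIX side, one hedged workaround is to mount the FTP server as a filesystem and point any du-style tool at it (host, credentials, and mount point are illustrative; assumes curlftpfs and ncdu are installed):

      mkdir -p /mnt/ftpsite
      curlftpfs ftp://user:password@ftp.example.com /mnt/ftpsite
      ncdu /mnt/ftpsite              # interactive, du-style usage browser
      fusermount -u /mnt/ftpsite     # unmount when done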

    Read the article

  • Windows 7: files reappear after deletion

    - by HeavyWave
    I'm trying to delete some files from a folder. I've taken ownership of the files and the folder. When I delete these files, Windows doesn't report any errors and deletes them. BUT, after I press F5 these files reappear again. There are no messages whatsoever; they are just undeletable. I know logging off will help, but how do I fix it without going through the pain of closing everything down? P.S. The files disappear from the folder after approx. 5 minutes. Update: it turns out my version of Windows did not properly upgrade from the test version, so it had some weird disk drive issues.

    Read the article
