Search Results

Search found 36111 results on 1445 pages for 'mysql update'.

Page 971/1445

  • My laptop causes the wireless router to stop working

    - by Pier
    Hello all, something strange is happening with my wireless connection. The wireless connection works fine with all the devices at home except my laptop (Toshiba Satellite, Windows 7 64-bit): as soon as I start it, the wireless network shuts off and the connection disappears for the other devices as well. I can only use the internet by plugging the laptop in with an Ethernet cable, and I have to shut off the wireless card to allow the other people in the house to use the wireless connection. I tried to update the firmware (the router is a D-Link DIR-615), but it's up to date, and to reconfigure the router, but everything indicates that the problem is caused by my laptop. I'm currently running an antivirus scan, but the computer seems to be clean. Any suggestions?

    Read the article

  • Upon reboot, Linux software raid fails to include one device of a RAID1 array

    - by user1389890
    One of my four Linux software raid arrays drops one of its two devices when I reboot my system. The other three arrays work fine. I am running RAID1 on kernel version 2.6.32-5-amd64 (Debian Squeeze). Every time I reboot, /dev/md2 comes up with only one device. I can manually add the device by saying $ sudo mdadm /dev/md2 --add /dev/sdc1. This works fine, and mdadm confirms that the device has been re-added as follows: mdadm: re-added /dev/sdc1 After adding the device and allowing the array time to resynch, this is what the output of $ cat /proc/mdstat looks like: Personalities : [raid1] md3 : active raid1 sda4[0] sdb4[1] 244186840 blocks super 1.2 [2/2] [UU] md2 : active raid1 sdc1[0] sdd1[1] 732574464 blocks [2/2] [UU] md1 : active raid1 sda3[0] sdb3[1] 722804416 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdb1[1] 6835520 blocks [2/2] [UU] unused devices: <none> Then after I reboot, this is what the output of $ cat /proc/mdstat looks like: Personalities : [raid1] md3 : active raid1 sda4[0] sdb4[1] 244186840 blocks super 1.2 [2/2] [UU] md2 : active raid1 sdd1[1] 732574464 blocks [2/1] [_U] md1 : active raid1 sda3[0] sdb3[1] 722804416 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdb1[1] 6835520 blocks [2/2] [UU] unused devices: <none> During reboot, here is the output of $ sudo cat /var/log/syslog | grep mdadm : Jun 22 19:00:08 rook mdadm[1709]: RebuildFinished event detected on md device /dev/md2 Jun 22 19:00:08 rook mdadm[1709]: SpareActive event detected on md device /dev/md2, component device /dev/sdc1 Jun 22 19:00:20 rook kernel: [ 7819.446412] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.446415] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.446782] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.446785] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.515844] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.515847] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.606829] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.606832] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855616] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855620] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855950] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855952] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8027.962169] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8027.962171] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8028.054365] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8028.054368] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.588662] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.588664] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.601990] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.601991] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.602693] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.602695] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.605981] mdadm: sending ioctl 1261 to a partition! 
Jun 22 19:10:23 rook kernel: [ 9.605983] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.606138] mdadm: sending ioctl 800c0910 to a partition! Jun 22 19:10:23 rook kernel: [ 9.606139] mdadm: sending ioctl 800c0910 to a partition! Jun 22 19:10:48 rook mdadm[1737]: DegradedArray event detected on md device /dev/md2 Here is the result of $ cat /etc/mdadm/mdadm.conf: ARRAY /dev/md0 metadata=0.90 UUID=92121d42:37f46b82:926983e9:7d8aad9b ARRAY /dev/md1 metadata=0.90 UUID=9c1bafc3:1762d51d:c1ae3c29:66348110 ARRAY /dev/md2 metadata=0.90 UUID=98cea6ca:25b5f305:49e8ec88:e84bc7f0 ARRAY /dev/md3 metadata=1.2 name=rook:3 UUID=ca3fce37:95d49a09:badd0ddc:b63a4792 Here is the output of $ sudo mdadm -E /dev/sdc1 after re-adding the device and letting it resync: /dev/sdc1: Magic : a92b4efc Version : 0.90.00 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook) Creation Time : Sun Jul 13 08:05:55 2008 Raid Level : raid1 Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Array Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Update Time : Mon Jun 24 07:42:49 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Checksum : 5fd6cc13 - correct Events : 180998 Number Major Minor RaidDevice State this 0 8 33 0 active sync /dev/sdc1 0 0 8 33 0 active sync /dev/sdc1 1 1 8 49 1 active sync /dev/sdd1 Here is the output of $ sudo mdadm -D /dev/md2 after re-adding the device and letting it resync: /dev/md2: Version : 0.90 Creation Time : Sun Jul 13 08:05:55 2008 Raid Level : raid1 Array Size : 732574464 (698.64 GiB 750.16 GB) Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Mon Jun 24 07:42:49 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook) Events : 0.180998 Number Major Minor RaidDevice State 0 8 33 0 active sync /dev/sdc1 1 8 49 1 active sync /dev/sdd1 I also ran $ sudo smartctl -t long /dev/sdc and no hardware issues were detected. As long as I do not reboot, /dev/md2 seems to work fine. Does anyone have any suggestions?
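    One avenue worth checking, offered only as a hedged sketch rather than a confirmed diagnosis: on Debian the initramfs carries its own copy of /etc/mdadm/mdadm.conf, and a stale or incomplete copy there can leave a member out of an array at early assembly even though the on-disk configuration looks correct. Comparing the two and refreshing the initramfs costs little:

        # What the kernel currently assembles vs. what is configured on disk
        sudo mdadm --detail --scan
        cat /etc/mdadm/mdadm.conf

        # If they agree, rebuild the initramfs so it picks up the current
        # mdadm.conf, then reboot and re-check /proc/mdstat
        sudo update-initramfs -u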

    Read the article

  • Chroot with CentOS 5.3 + openssh 4.3p2

    - by Scud
    OS: CentOS 5.3, with OpenSSH 4.3p2. I'm trying to set up a chroot for the SSH shell, but OpenSSH versions prior to 4.8 don't accept the settings below. yum update openssh only brings it up to version 4.3, which is quite old. Doesn't CentOS support OpenSSH 4.8 or later? If not, how can I set up a chroot with OpenSSH 4.3, or is it better to just use FTP? My purpose is to limit SFTP or FTP access to a certain folder, not the root folder. Thanks! Match group sftponly ChrootDirectory /home/%u X11Forwarding no AllowTcpForwarding no ForceCommand internal-sftp
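    OpenSSH only gained the Match/ChrootDirectory directives shown above in version 4.8/4.9, so the stock CentOS 5 sshd will reject them. If staying on the distribution packages, one common workaround, sketched below under the assumption that plain FTP is acceptable, is to jail users with a chrooted FTP daemon such as vsftpd instead of sshd:

        # Install vsftpd from the CentOS repositories
        sudo yum install vsftpd

        # /etc/vsftpd/vsftpd.conf -- the relevant lines only
        #   local_enable=YES
        #   write_enable=YES
        #   chroot_local_user=YES    # confine local users to their home directories

        sudo service vsftpd start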

    Read the article

  • Domain Controllers group not reflected in domain controllers credentials

    - by Molotch
    I set up a small test lab in VirtualBox consisting of four servers: two domain controllers (dc01, dc02), one offline root CA, and one online enterprise sub CA (ca01). All servers are based on Windows Server 2008 R2 Standard. Everything works as expected except one thing. If I issue a certificate template with read, enroll and autoenroll rights for the security group "Domain Controllers", it does not let dc01 or dc02 enumerate or enroll for the certificate. I've restarted both domain controllers several times to update their credential tokens with the correct group memberships. So I added dc01 to the "Domain Computers" group and gave that group read, enroll and autoenroll rights in the template, and bam, the certificate was issued. So my question is: why isn't the Domain Controllers group membership reflected in the domain controllers' (dc01 and dc02) credentials? Can I view the computers' credentials somehow, and how should I go about trying to resolve the issue?

    Read the article

  • Unable to install Windows Installer 4.5 on Windows Server 2008

    - by Tim Trout
    One of the prerequisites for installing SQL Server 2008 Express R2 is Windows Installer 4.5. I have a couple of WS2008 machines I'm prepping, so I downloaded the appropriate version of the file (Windows6.0-KB942288-v2-x86.msu) to our file server and tried installing it on both machines. On both machines I get the cryptic 0x80070003 error, "the system cannot find the file specified", but it does not show which file it cannot find. I don't get this when I install the Windows XP version on one of my XP machines. Any clues as to what I might be missing or doing wrong? One TechNet help forum suggested I try installing the "System Update Readiness Tool", but that installer also fails with the same error code.

    Read the article

  • Problems in "Save as PDF" plugin with Arabic numbers

    - by Mohamed Mohsen
    I use the "Save as PDF" plugin with Word 2007 to generate a PDF document from a DOCX document. It works great except that the Arabic numbers in the Word file have been converted to English numbers in the PDF document. Kindly find two links containing two screen shots explaining the problem. The first image is the generated PDF file with the English numbers highlighted. The second image is the original word file with the Arabic numbers highlighted. Update: Thanks very much Isaac, ChrisF and Wil. I changed the Numeral at word to Context and confirmed that all the numbers are Arabic at the Word file. I still have the problem as the PDF file still have English numbers. (Note: The Arabic numbers called Hindi numbers). I also tried changing the font to Tahoma with no hope.

    Read the article

  • Java Spotlight Episode 75: Greg Luck on JSR 107 Java Temporary Caching API

    - by Roger Brinkley
    Tweet Recorded live at Jfokus 2012, an interview with Greg Luck on JSR 107 Java Temporary Caching API. Joining us this week on the Java All Star Developer Panel is Alexis Moussine-Pouchkine, Java EE Developer Advocate. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link:  Java Spotlight Podcast in iTunes. Show Notes News JavaOne 2012 call for papers is open (closes April 9th) LightFish, Adam Bien's lightweight telemetry application Java EE 6 sample code JavaFX 1.2 and 1.3 EOL Repeating Annotations in the Works Events March 26-29, EclipseCon, Reston, USA March 27, Virtual Developer Days - Java (Asia Pacific (English)),9:30 am to 2:00pm IST / 12:00pm to 4.30pm SGT  / 3.00pm - 7.30pm AEDT April 4-5, JavaOne Japan, Tokyo, Japan April 12, GreenJUG, Greenville, SC April 17-18, JavaOne Russia, Moscow Russia April 18–20, Devoxx France, Paris, France April 26, Mix-IT, Lyon, France, May 3-4, JavaOne India, Hyderabad, India Feature Interview Greg Luck founded Ehcache in 2003. He regularly speaks at conferences, writes and codes. He has also founded and maintains the JPam and Spnego open source projects, which are security focused. Prior to joining Terracotta in 2009, Greg was Chief Architect at Wotif.com where he provided technical leadership as the company went from a single product startup to a billion dollar public company with multiple product lines. Before that Greg was a consultant for ThoughtWorks with engagements in the US and Australia in the travel, health care, geospatial, banking and insurance industries. Before doing programming, Greg managed IT. He was CIO at Virgin Blue, Tempo Services, Stamford Hotels and Resorts and Australian Resorts. He is a Chartered Accountant, and spent 7 years with KPMG in small business and insolvency. Mail Bag What’s Cool RT @harkje: To update an earlier tweet: #JavaFX feels like Swing with added convenience methods, better looking widgets, nice effects and...

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as their entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with with 'newer' improved technology today - but I disgress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations like queries DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs dynamic query and the code ends up looping over the result set using the ugly DataRow Array syntax:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { string title = row["title"] as string; string safeTitle = row["safeTitle"] as string; int pk = (int)row["pk"]; string newSafeTitle = this.GetSafeTitle(title); if (newSafeTitle != safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",pk) ); result++; } } return result; } The problem with looping over DataRow objecs is two fold: The array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute this code can be changed to look like this:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { dynamic entry = new DynamicDataRow(row); string newSafeTitle = this.GetSafeTitle(entry.title); if (newSafeTitle != entry.safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",entry.pk) ); result++; } } return result; } The code looks much a bit more natural and describes what's happening a little nicer as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class. Creating your own custom Dynamic Objects .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenearios) but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform  custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Bascially you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access. 
In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what simple class looks like:/// <summary> /// This class provides an easy way to turn a DataRow /// into a Dynamic object that supports direct property /// access to the DataRow fields. /// /// The class also automatically fixes up DbNull values /// (null into .NET and DbNUll to DataRow) /// </summary> public class DynamicDataRow : DynamicObject { /// <summary> /// Instance of object passed in /// </summary> DataRow DataRow; /// <summary> /// Pass in a DataRow to work off /// </summary> /// <param name="instance"></param> public DynamicDataRow(DataRow dataRow) { DataRow = dataRow; } /// <summary> /// Returns a value from a DataRow items array. /// If the field doesn't exist null is returned. /// DbNull values are turned into .NET nulls. /// /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; try { result = DataRow[binder.Name]; if (result == DBNull.Value) result = null; return true; } catch { } result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { try { if (value == null) value = DBNull.Value; DataRow[binder.Name] = value; return true; } catch {} return false; } } To demonstrate the basic features here's a short test: [TestMethod] [ExpectedException(typeof(RuntimeBinderException))] public void BasicDataRowTests() { DataTable table = new DataTable("table"); table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) }); table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) }); table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) }); DataRow row = table.NewRow(); DateTime now = DateTime.Now; row["Name"] = "Rick"; row["Entered"] = now; row["NullValue"] = null; // converted in DbNull dynamic drow = new DynamicDataRow(row); string name = drow.Name; DateTime entered = drow.Entered; string nulled = drow.NullValue; Assert.AreEqual(name, "Rick"); Assert.AreEqual(entered,now); Assert.IsNull(nulled); // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); } The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP -  but the fact that we can create a dynamic type that uses a DataRow as it's 'DataSource' to serve member values. It's pretty useful feature if you think about it, especially given how little code it takes to implement. 
By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow: Direct Property Syntax Automatic Type Casting so no explicit casts are required Caveats As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invokations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - is going to be faster than the equivalent dynamic code. However, in the above code the difference of running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: Make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far performance is pretty good for repeated operations (ie. in loops). While usually a little slower the perf hit is a lot less typically than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:// this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); Dynamic - Lots of uses The arrival of Dynamic types in .NET has been met with mixed emotions. Die hard .NET developers decry dynamic types as an abomination to the language. After all what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'Method missing' behavior on objects with InvokeMember which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: It's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…© Rick Strahl, West Wind Technologies, 2005-2011Posted in CSharp  .NET   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Launchpad autobuild fails when trying to access Maven component

    - by Jauder Ho
    I'm trying to build thrift as a package for the first time on Launchpad and it is failing. It appears that it is failing to access repo2.maven.org for some reason. It may be that there is some access restriction, but I'm not sure. I will note that the package built successfully locally. Link to build log. Search for the string "repo2.maven.org" and you will see that there is a java.net.UnknownHostException. The repo packages are here. PS. These packages are an update to the thrift package as packaged by the txAMQP team, using v0.2.0 of thrift.
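    Launchpad builders run with no outbound network access, so any build step that tries to download from repo2.maven.org will fail there even though the same package builds on a normally connected machine; the Java dependencies have to come from the Ubuntu archive (via Build-Depends) or be shipped inside the source package. A rough way to reproduce the restriction locally, sketched here with an illustrative repository path rather than anything taken from the actual thrift packaging, is to force Maven offline:

        # --offline (-o) makes Maven fail fast instead of reaching for repo2.maven.org
        mvn -o package

        # If the packaging bundles its own artifacts, point Maven at that local
        # repository (debian/maven-repo is only an example path)
        mvn -o -Dmaven.repo.local=debian/maven-repo package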

    Read the article

  • Howto WCF Service HTTPS Binding and Endpoint Configuration in IIS with Load Balancer?

    - by Mike G
    We have a WCF service that is being hosted on a set of 12 machines. There is a load balancer that is a gateway to these machines. Now the site is setup as SSL; as in a user accesses it through using an URL with https. I know this much, the URL that addresses the site is https, but none of the servers has a https binding or is setup to require SSL. This leads me to believe that the load balancer handles the https and the connection from the balancer to the servers are unencrypted (this takes place behind the firewall so no biggie there). The problem we're having is that when a Silverlight client tries to access a WCF service it is getting a "Not Found" error. I've set up a test site along with our developer machines and have made sure that the bindings and endpoints in the web.config work with the client. It seems to be the case in the production environment that we get this error. Is there anything wrong with the following web.config? Should we be setting up how https is handled in a different manner? We're at a loss on this currently since I've tried every programmatic solution with endpoints and bindings. None of the solutions I have found deal with a load balancer in the manner we're dealing. Web.config service model info: <system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior"> <serviceMetadata httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> <behavior name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior"> <serviceMetadata httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <customBinding> <binding name="SecureCRMCustomBinding"> <binaryMessageEncoding /> <httpsTransport /> </binding> <binding name="SecureAACustomBinding"> <binaryMessageEncoding /> <httpsTransport /> </binding> </customBinding> <mexHttpsBinding> <binding name="SecureMex" /> </mexHttpsBinding> </bindings> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> <!--Defines the services to be used in the application--> <services> <service behaviorConfiguration="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior" name="TradePMR.OMS.Framework.Services.CRM.CRMService"> <endpoint address="" binding="customBinding" bindingConfiguration="SecureCRMCustomBinding" contract="TradePMR.OMS.Framework.Services.CRM.CRMService" name="SecureCRMEndpoint" /> <!--This is required in order to be able to use the "Update Service Reference" in the Silverlight application--> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> <service behaviorConfiguration="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior" name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation"> <endpoint address="" binding="customBinding" bindingConfiguration="SecureAACustomBinding" contract="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation" name="SecureAAEndpoint" /> <!--This is required in order to be able to use the "Update Service Reference" in the Silverlight application--> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> </system.serviceModel> </configuration> The ServiceReferences.ClientConfig looks like this: <configuration> <system.serviceModel> <bindings> <customBinding> <binding name="StandardAAEndpoint"> <binaryMessageEncoding /> <httpTransport 
maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" /> </binding> <binding name="SecureAAEndpoint"> <binaryMessageEncoding /> <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" /> </binding> <binding name="StandardCRMEndpoint"> <binaryMessageEncoding /> <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" /> </binding> <binding name="SecureCRMEndpoint"> <binaryMessageEncoding /> <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" /> </binding> </customBinding> </bindings> <client> <endpoint address="https://Service2.svc" binding="customBinding" bindingConfiguration="SecureAAEndpoint" contract="AccountAggregationService.AccountAggregation" name="SecureAAEndpoint" /> <endpoint address="https://Service1.svc" binding="customBinding" bindingConfiguration="SecureCRMEndpoint" contract="CRMService.CRMService" name="SecureCRMEndpoint" /> </client> </system.serviceModel> </configuration> (The addresses are of no consequence since those are dynamically built so that they will point to a dev's machine or to the production server)

    Read the article

  • Join our webcast: Discover What’s New in Oracle Data Integrator and Oracle GoldenGate

    - by Irem Radzik
    The data integration team has organized a series of webcasts for this summer. We are kicking it off this Thursday, June 30th, at 10am PT with a product update webcast: Discover What’s New in Oracle Data Integrator and Oracle GoldenGate. In this webcast you will hear from product management about the new patch updates to both GoldenGate 11g R1 and ODI 11gR1. Jeff Pollock, Sr. Director of Product Management for ODI, will talk about the new features in Oracle Data Integrator 11.1.1.5, including the data lineage integration with OBI EE, enhanced web services to support flexible architectures, as well as capabilities for efficient object execution such as Load Plans. Jeff will discuss support for complex files and performance enhancements. Chris McAllister, Sr. Director of Product Management for Oracle GoldenGate, will cover the new features of Oracle GoldenGate 11.1.1.1, such as increased data security by supporting the Oracle Database Advanced Security option, deeper integration with Oracle Database, and the expanded list of heterogeneous databases GoldenGate supports. Chris will also talk about the new Oracle GoldenGate 11gR1 release for the HP NonStop platform and will provide information on our strategic direction for product development. Join us this Thursday at 10am PT / 1pm ET to hear directly from Data Integration Product Management. You can register here for the June 30th webcast as well as for the upcoming ones in our summer webcast series.

    Read the article

  • Does a site's bounce rate influence Google rankings?

    - by Joel Spolsky
    Does Google consider bounce rate or something similar in ranking sites? Background: here at Stack Exchange we noticed that the latest Google algorithm changes resulted in about a 20% dip in traffic to Server Fault (and a much smaller dip in traffic to Super User). Stack Overflow traffic was not affected. There was an article on WebProNews which hypothesized that bounce rate might be a ranking signal in Google's latest Panda update. According to Google Analytics, these are our bounce rates over the last month: Site Bounce Rate Avg Time on Site ------------- ----------- ---------------- SuperUser 84.67% 01:16 ServerFault 83.76% 00:53 Stack Overflow 63.63% 04:12 Now, technically, Google has no way to know the bounce rate. If you go to Google, search for something, and click on the first result, Google can't tell the difference between: a user who turns off their computer a user who goes to a completely different web site a user who spends hours clicking around on the website they landed on What Google does know is how long it takes the user to come back to Google and do another search. According to the book In The Plex (page 47), Google distinguishes between what they call "short clicks" and "long clicks": A short click is a search where the user quickly comes back to Google and does another search. Google interprets this as a signal that the first search results were unsatisfactory. A long click is a search where the user doesn't search again for a long time. The book says that Google uses this information internally, to judge the quality of their own algorithms. It also said that short click data in which someone retypes a slight variation of the search is used to fuel the "Did you mean...?" spell checking algorithm. So, my hypothesis is that Google has recently decided to use long click rates as a signal of a high quality site. Does anyone have any evidence of this? Have you seen any high-bounce-rate sites which lost traffic (or vice-versa)?

    Read the article

  • Proliant ML350 Won't Load Windows

    - by Mike
    I have a HP Proliant ML350 G4 running Windows 2003 Server Std Edition that will not load Windows on boot. It gave me a BSOD while running a recent Windows update and has not been able to boot since. The BSOD was a generic error, "Hardware Fault. Contact vendor." I am not getting any errors when the machine boots--it just hangs with a black screen when Windows starts to load. The machine will not boot into safe mode either--it gets stuck after beginning to load drivers. When trying to load the Windows recovery console I get the message that Windows cannot find any hard disks on the machine. This, despite having to load the driver for my storage controller. The HP Smart Start diagnostics finds no problems with my hard disks nor anything else. What should I try next? Mainly, I just want to be able to pull the data off my hard disks.

    Read the article

  • Google Contacts and Mac OS X Address Book

    - by dyve
    The sync that Google offers between Google Contacts and the Mac OS X Address Book is seriously flawed. It doesn't sync automatically, it doesn't sync all contacts, it doesn't sync all fields. See here for a list of warnings and issues. Is there a better way to sync Google Contacts to the Mac Address Book bidirectionally? Preferably free, and preferably without adding extra software. I have tried to do it through Plaxo (which has an excellent Mac Address Book sync, albeit through an extra software install), but Plaxo doesn't handle Google Sync well (no updates). UPDATE: For the new Mac OS X Snow Leopard this shouldn't be an issue. This question is looking for a Leopard answer.

    Read the article

  • Re-Calibrate Toshiba battery meter with Windows 7

    - by Mark Henderson
    I have a Toshiba U300 laptop. Back in the olden days (circa 2000) you could re-calibrate the battery meter from the BIOS when your battery was no longer reading accurately. My battery was shot, so I bought a brand new one; however, Windows 7 still seems to count down the battery life at the same speed as it did with the old one. This time, when it claimed 0%, I turned the laptop back on and got another 90 minutes of life out of it, at 0%. Is there a tool or utility for Toshiba laptops that can make the meter accurate again? I've read the other threads on SU about cycling the battery, but that just doesn't seem to do it in this case. Update: After the last cycle, the battery meter now reads 100%, but the indicator on the front of the laptop is still orange (indicating not fully charged).

    Read the article

  • PCSX2 1.0.0 installation issues

    - by user205261
    I've followed all the tutorials I can find to get this emulator to work and I'm now getting this error. Granted I've gotten a lot further than when I started. Running 13.10. Just downloaded the Linux download file from the pcsx2 website and did the following in terminal: jonathan@Assassin:~$ sudo add-apt-repository ppa:gregory-hainaut/pcsx2.official.ppa [sudo] password for jonathan: The Official pcsx2 ppa, provided by the pcsx2 team. This ppa contains a regular package snapshot of pcsx2. We are not package experts, but we try to follow the debian policy. Press [ENTER] to continue or ctrl-c to cancel adding it gpg: keyring `/tmp/tmpe5fwdz/secring.gpg' created gpg: keyring `/tmp/tmpe5fwdz/pubring.gpg' created gpg: requesting key 7A617FF4 from hkp server keyserver.ubuntu.com gpg: /tmp/tmpe5fwdz/trustdb.gpg: trustdb created gpg: key 7A617FF4: public key "Launchpad official ppa for pcsx2 team" imported gpg: Total number processed: 1 gpg: imported: 1 (RSA: 1) OK jonathan@Assassin:~$ sudo apt-get update Err http://ppa.launchpad.net saucy/main amd64 Packages 404 Not Found Err http://ppa.launchpad.net saucy/main i386 Packages 404 Not Found Fetched 67.4 kB in 16s (3,977 B/s) W: Failed to fetch ppa.launchpad.net/gregory-hainaut/pcsx2.official.ppa/ubuntu/dists/saucy/main/binary-amd64/Packages 404 Not Found W: Failed to fetch ppa.launchpad.net/gregory-hainaut/pcsx2.official.ppa/ubuntu/dists/saucy/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead. jonathan@Assassin:~$ sudo apt-get install pcsx2 Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package pcsx2 So I'm assuming I just need the way to get the two missing packages?
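    The 404 errors mean the PPA simply publishes no packages for saucy (13.10), so apt-get has nothing to install; the missing Packages files are a symptom, not the cause. One workaround people sometimes use, at their own risk since binaries built for another release may pull in unmet dependencies, is to point the PPA entry at the newest series the PPA does publish for and update again. A sketch, where both the file name pattern under sources.list.d and the target series "raring" are assumptions to verify:

        # Point the PPA at an older series (file name and series are assumptions)
        sudo sed -i 's/saucy/raring/g' /etc/apt/sources.list.d/gregory-hainaut-*.list
        sudo apt-get update
        sudo apt-get install pcsx2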

    Read the article

  • VPN with wrong subnet mask

    - by Philipp Schmid
    I followed these instructions on www.hottonetworking.com to set up VPN on a clean install of Windows Server 2008 SP2 (not R2 yet). When I then establish a VPN connection to that machine from a client machine (running Windows 7 RC), everything appears to succeed, since I get a 'Connected' state in the Network and Sharing Center window, but I end up with a subnet mask (according to ipconfig /all) of 255.255.255.255 instead of 255.255.255.0. The net effect is that I don't have local network or internet capability. What additional configuration steps do I have to take to get VPN working with the proper subnet mask? Update: Using the steps outlined in the TechNet article mentioned by Mr. Nimble, I was able to get an internet connection. Apparently the subnet mask is not an issue, as my coworker was able to connect using his VPN connection and ping the server machine by name as well.

    Read the article

  • MDX Studio download #mdx #ssas

    - by Marco Russo (SQLBI)
    Short version: the latest available version of MDX Studio can be downloaded from http://www.sqlbi.com/tools/mdx-studio/ Long version: Last week Stacia Misner tweeted that the online version of MDX Studio was no longer available. It was hosted on http://mdx.mosha.com. It was sad news, and it is also not good that nobody is maintaining the desktop version of MDX Studio. The latest release is 0.4.14 and, as I write, it is still available on a SkyDrive link provided by Mosha Pasumansky, who wrote MDX Studio. Mosha does not work at Microsoft now, and the entire BI community hopes that somebody will continue his work on this product. Unfortunately, it cannot be published on CodePlex because of some IP restrictions. Only bad news? Well, I hope not. The first piece of good news is that MDX Studio also works with Analysis Services 2012 in Multidimensional mode. The second is that, after having checked that we can do so, we created a web page on the SQLBI web site to download the latest available release of MDX Studio. I hope it will be necessary to update it in the future; for now it is just a way to simplify finding and downloading this precious tool, and to guarantee that it will not disappear in case the SkyDrive account currently used to host the download is discontinued, as happened to the MDX Studio online version. Now a question for the BI community: I know that there was some tutorial content available for MDX Studio. I'd like to gather it and put it all in a single place. If you have such content, please contact me directly by writing to marco (dot) russo (at) sqlbi [dot] com. Thanks!

    Read the article

  • "powershell has stopped working" on PS exit after creating IIS User

    - by Nick Jones
    On a Windows Server 2012 box, I'm using PS to create a new IIS User (for automated deployments using MSDeploy). The command itself appears to work fine -- the user is created -- but as soon as I exit my PowerShell session (typing exit or just closing the command window), a dialog is displayed stating "powershell has stopped working", with the following details: Problem signature: Problem Event Name: PowerShell NameOfExe: powershell.exe FileVersionOfSystemManagementAutomation: 6.2.9200.16628 InnermostExceptionType: Runtime.InteropServices.InvalidComObject OutermostExceptionType: Runtime.InteropServices.InvalidComObject DeepestPowerShellFrame: unknown DeepestFrame: System.StubHelpers.StubHelpers.GetCOMIPFromRCW ThreadName: unknown OS Version: 6.2.9200.2.0.0.400.8 Locale ID: 1033 The PS commands in question are: [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management") [Microsoft.Web.Management.Server.ManagementAuthentication]::CreateUser("Foo", "Bar") Why is this happening and how can I avoid it? EDIT: I've also confirmed this be a problem with PowerShell 4.0, so I've added that tag. I've also submitted it on Connect. UPDATE: It appears that Windows Server 2012 R2 does not have this same bug.

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'. I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;) This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files. Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.) Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM): cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn" C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt   I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)  
Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that. Sometimes though, having these files available can lead to some incredible learning experiences.  For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated. Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which files it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! - and am curious about the PDW compilation and Cloud DB replication reasons. One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found: hekaton_slow_param_passing Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path The reason why Hekaton parameter passing code took the slow code path hekaton_slow_param_pass_reason sp_deploy_hekaton_database sp_undeploy_hekaton_database sp_drop_hekaton_database sp_checkpoint_hekaton_database sp_restore_hekaton_database e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp Interesting!  The first 4 entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)? 
I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions: SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%' SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%' Which revealed: name ------------------------ (0 row(s) affected) name ------------------------ sp_createstats sp_recompile sp_updatestats (3 row(s) affected)   Hmm.  Well that didn't find much.  Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.) OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton": sp_createstats: -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables   OK, that's interesting, let's go looking down a little further: ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table   Wellllll, that tells us a few new things: There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!) They are not standard user tables and probably not in memory UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post. The OBJECTPROPERTY function has an undocumented TableIsInMemory option Let's check out sp_recompile: -- (3) Must not be a Hekaton procedure.   And once again go a little further: if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND ObjectProperty(@objid, 'IsInlineFunction') = 0 AND ObjectProperty(@objid, 'IsView') = 0 AND -- Hekaton procedure cannot be recompiled -- Make them go through schema version bumping branch, which will fail ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)   And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find this option returns null, I think it only works when called from a system object.) This is neat! Sadly sp_updatestats doesn't reveal anything new, the comments about hekaton are the same as sp_createstats.  But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for: SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'   I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice! Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.  
While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!)  And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring! So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.) Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that, it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there: Attach Matrix Database DM_MATRIX_COMM_PIPELINES MATRIXXACTPARTICIPANTS dm_matrix_agents   Alright, enough of this paranoid ranting!  Microsoft are not really evil!  It's not like they're The Borg from Star Trek: ALTER FEDERATION DROP ALTER FEDERATION SPLIT DROP FEDERATION   #tsql2sday

    Read the article

  • Disable Ethernet permanently to speed up boot time

    - by Anwar Shah
    I do not use the wired Ethernet Card. It seems to me that, Ubuntu is always trying in boot time to check the network via eth0, Which consumes some times and I guess this may slow down the boot process a bit. My dmesg output is below (partial) 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 1.985592] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:01/input/input5 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 1.985651] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 1.985693] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 2.056261] firewire_core: created device fw0: GUID 00023f87af41fd7d, S400 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 3.710435] EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null) A big time here..... 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 13.466642] ADDRCONF(NETDEV_UP): eth0: link is not ready 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.125296] Adding 1050620k swap on /dev/sda6. Priority:-1 extents:1 across:1050620k 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.226952] EXT4-fs (sda9): re-mounted. Opts: (null) 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.335012] snd_hda_intel 0000:00:1b.0: PCI INT A - GSI 22 (level, low) - IRQ 22 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.335091] snd_hda_intel 0000:00:1b.0: irq 45 for MSI/MSI-X 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.335128] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.346410] input: Ideapad extra buttons as /devices/platform/ideapad/input/input6 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.428551] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.436958] cfg80211: Calling CRDA to update world regulatory domain 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.476550] Linux video capture interface: v2.00 2012-06-11 23:06:47 Ubuntu-KDE kernel [ 14.486385] uvcvideo: Found UVC 1.00 device USB 2.0 Camera (04f2:b008) So, My question is How can I disable the Ethernet card completely, so that kernel will not try to use that?
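    The delay usually comes from network configuration waiting on the link rather than from the driver itself, but if the goal is to keep the kernel from touching the wired NIC at all, the usual approaches are to blacklist its driver module or simply to take the interface down. A hedged sketch (the module name below is only an example; use whatever lspci reports for your card):

        # Find the kernel module that drives the wired NIC
        lspci -k | grep -A3 -i ethernet

        # Option 1: blacklist that module so it is never loaded at boot
        # (replace e1000e with the module reported above)
        echo "blacklist e1000e" | sudo tee /etc/modprobe.d/blacklist-ethernet.conf
        sudo update-initramfs -u

        # Option 2: just take the interface down for the current session
        sudo ip link set eth0 down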

    Read the article

  • apt-get doesn't see packages in my trivial repository

    - by lorin
    I've tried to set up a trivial repository with binary .debs for internal use, but apt-get doesn't see the packages. I've done the following: On the web server: Created the binary debs with dpkg-buildpackage Put all of the binary debs in a web-accessible directory which corresponds to http://www.example.com/packages Generated a Packages.gz file in the same directory by doing: dpkg-scansources . /dev/null | gzip -9c > Packages.gz On the client machine: Added the following line to my /etc/apt/sources.list file: deb http://www.example.com/packages / Ran: sudo apt-get update The output related to my trivial repository looked like this: Ign http://www.example.com Release.gpg Ign http://www.example.com/packages/ Translation-en_US Ign http://www.example.com Release Ign http://www.example.com Packages Ign http://www.example.com Packages Hit http://www.example.com Packages But I can't install the package by name. For example, there's a package called "python-nova" which corresponds to package python-nova_2011.3-custom~bzr680-0ubuntu1_all.deb I've tried to do: apt-get install python-nova, but I get the following error: $ sudo apt-get install python-nova Reading package lists... Done Building dependency tree Reading state information... Done E: Couldn't find package python-nova
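    One thing that stands out in the steps above, offered as a hedged observation: dpkg-scansources indexes source packages and produces a Sources file, so a directory of binary .debs indexed with it never gets a usable Packages index, which matches the "Ign ... Packages" lines in the apt-get update output. The usual recipe for a trivial binary repository looks like the sketch below (paths and names follow the question's own example):

        # On the web server, inside the directory served as /packages
        dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

        # On the client
        echo "deb http://www.example.com/packages /" | sudo tee -a /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install python-nova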

    Read the article

  • Oracle Database Security Protecting the Oracle IRM Schema

    - by Simon Thorpe
    Acquiring the Information Rights Management technology in 2006 was part of Oracle's strategic security vision, and IRM nicely complements the overall Oracle security set of solutions. A year ago I spoke about how Oracle has solutions that can help companies protect information throughout its entire life cycle. With our acquisition of Sun this set of solutions has solidified and has even extended down to the operating system and hardware level. Oracle can now offer customers technology that protects their data from the disk, through the database, to documents on the desktop! With the recent release of Oracle IRM 11g I was tasked with configuring demonstration and evaluation environments, and I thought it would make a nice story to leverage some of the security features in the latest release of the Oracle Database. After building these environments I thought I would put together a simple video demonstrating how both Database Advanced Security and Information Rights Management combined can provide a very secure platform for protecting your information. Have a look at the following, which highlights these database security options: Transparent Data Encryption protecting the communication from the Oracle IRM server to the database server. Encryption techniques provide confidentiality and integrity of the data passing to and from the IRM service on the back end. Transparent Data Encryption protecting the Oracle IRM database schema. Encryption is used to provide confidentiality of the IRM data whilst it resides at rest in the database tablespace. Database Vault is used to ensure only the Oracle IRM service has access to query and update the information that resides in the database. This is an excellent method of ensuring that database administrators cannot look at or make changes to the Oracle IRM database whilst retaining their ability to administer the database. The last thing you want after deploying an IRM solution is for a curious or unhappy DBA to run a query that grants them rights to your company's financial data or documents pertaining to a merger or acquisition.

    Read the article

  • iptables SYN flood countermeasure

    - by Penegal
    I'm trying to adjust my iptables firewall to increase the security of my server, and I found something a bit problematic here : I have to set INPUT policy to ACCEPT and, in addition, to have a rule saying iptables -I INPUT -i eth0 -j ACCEPT. Here comes my script (launched manually for tests) : #!/bin/sh IPT=/sbin/iptables echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X echo "Defining logging policy for dropped packets" $IPT -N LOGDROP $IPT -A LOGDROP -j LOG -m limit --limit 5/min --log-level debug --log-prefix "iptables rejected: " $IPT -A LOGDROP -j DROP echo "Setting firewall policy" $IPT -P INPUT DROP # Deny all incoming connections $IPT -P OUTPUT ACCEPT # Allow all outgoing connections $IPT -P FORWARD DROP # Deny all forwaring echo "Allowing connections from/to lo and incoming connections from eth0" $IPT -I INPUT -i lo -j ACCEPT $IPT -I OUTPUT -o lo -j ACCEPT #$IPT -I INPUT -i eth0 -j ACCEPT echo "Setting SYN flood countermeasures" $IPT -A INPUT -p tcp -i eth0 --syn -m limit --limit 100/second --limit-burst 200 -j LOGDROP echo "Allowing outgoing traffic corresponding to already initiated connections" $IPT -A OUTPUT -p ALL -m state --state ESTABLISHED,RELATED -j ACCEPT echo "Allowing incoming SSH" $IPT -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH -j ACCEPT echo "Setting SSH bruteforce attacks countermeasures (deny more than 10 connections every 10 minutes)" $IPT -A INPUT -p tcp --dport 22 -m recent --update --seconds 600 --hitcount 10 --rttl --name SSH -j LOGDROP echo "Allowing incoming traffic for HTTP, SMTP, NTP, PgSQL and SolR" $IPT -A INPUT -p tcp --dport 25 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 80 -i eth0 -j ACCEPT $IPT -A INPUT -p udp --dport 123 -i eth0 -j ACCEPT $IPT -A INPUT -p tcp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 5433 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p tcp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT $IPT -A INPUT -p udp --dport 8983 -i eth0.2654 -s 172.16.0.2 -j ACCEPT echo "Allowing outgoing traffic for ICMP, SSH, whois, SMTP, DNS, HTTP, PgSQL and SolR" $IPT -A OUTPUT -p tcp --dport 22 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 25 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 43 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 53 -o eth0 -j ACCEPT $IPT -A OUTPUT -p tcp --dport 80 -o eth0 -j ACCEPT $IPT -A OUTPUT -p udp --dport 80 -o eth0 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 5433 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p tcp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT #$IPT -A OUTPUT -p udp --dport 8983 -o eth0 -d 176.31.236.101 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 5433 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p udp --sport 8983 -o eth0.2654 -j ACCEPT $IPT -A OUTPUT -p icmp -j ACCEPT echo "Allowing outgoing FTP backup" $IPT -A OUTPUT -p tcp --dport 20:21 -o eth0 -d 91.121.190.78 -j ACCEPT echo "Dropping and logging everything else" $IPT -A INPUT -s 0/0 -j LOGDROP $IPT -A OUTPUT -j LOGDROP $IPT -A FORWARD -j LOGDROP echo "Firewall loaded." 
echo "Maintaining new rules for 3 minutes for tests" sleep 180 $IPT -nvL echo "Clearing firewall rules" $IPT -F $IPT -Z $IPT -t nat -F $IPT -t nat -Z $IPT -t mangle -F $IPT -t mangle -Z $IPT -X $IPT -P INPUT ACCEPT $IPT -P OUTPUT ACCEPT $IPT -P FORWARD ACCEPT When I launch this script (I only have a SSH access), the shell displays every message up to Maintaining new rules for 3 minutes for tests, the server is unresponsive during the 3 minutes delay and then resume normal operations. The only solution I found until now was to set $IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT, but this configuration does not protect me of any attack, which is a great shame for a firewall. I suspect that the error comes from my script and not from iptables, but I don't understand what's wrong with my script. Could some do-gooder explain me my error, please? EDIT: here comes the result of iptables -nvL with the "accept all input" ($IPT -P INPUT ACCEPT and $IPT -I INPUT -i eth0 -j ACCEPT) solution : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 1 52 ACCEPT all -- eth0 * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.2 0.0.0.0/0 udp dpt:8983 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 2 728 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp spt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.78 tcp dpts:20:21 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (5 references) pkts bytes target prot opt in out source destination 0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 EDIT #2 : I modified my script (policy ACCEPT, defining authorized incoming packets then logging and dropping everything else) to write iptables -nvL results to a file and to allow only 
10 ICMP requests per second, logging and dropping everything else. The result proved unexpected : while the server was unavailable to SSH connections, even already established, I ping-flooded it from another server, and the ping rate was restricted to 10 requests per second. During this test, I also tried to open new SSH connections, which remained unanswered until the script flushed rules. Here comes the iptables stats written after these tests : Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 6 360 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 100/sec burst 200 0 0 LOGDROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "w00tw00t.at.ISC.SANS." ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: anoticiapb.com.br" ALGO name bm TO 65535 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "Host: www.anoticiapb.com.br" ALGO name bm TO 65535 105 8820 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 10/sec burst 5 830 69720 LOGDROP icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: SSH side: source 0 0 LOGDROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 recent: UPDATE seconds: 600 hit_count: 10 TTL-Match name: SSH side: source 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:5433 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:5433 0 0 ACCEPT tcp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 tcp spt:8983 0 0 ACCEPT udp -- eth0.2654 * 172.16.0.1 0.0.0.0/0 udp spt:8983 16 1684 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 600 35520 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 0 0 LOGDROP tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 owner UID match 33 0 0 LOGDROP udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 owner UID match 33 116 11136 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT udp -- * eth0 0.0.0.0/0 0.0.0.0/0 udp dpt:80 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:5433 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:5433 0 0 ACCEPT tcp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 tcp dpt:8983 0 0 ACCEPT udp -- * eth0.2654 0.0.0.0/0 0.0.0.0/0 udp dpt:8983 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 0.0.0.0/0 tcp dpt:43 0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 91.121.190.18 tcp dpts:20:21 7 1249 LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain LOGDROP (11 references) pkts bytes target prot opt in out source destination 35 3156 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 1/sec burst 5 LOG flags 0 level 7 prefix `iptables rejected: ' 859 73013 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Here 
comes the log content added during this test : Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55666 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=52 TOS=0x00 PREC=0x00 TTL=51 ID=55667 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55668 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:51 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55669 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:52 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55670 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:54 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55671 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:58 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55672 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=6 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=7 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=8 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=9 Mar 28 09:52:59 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=59 Mar 28 09:53:00 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=152 Mar 28 09:53:01 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=246 Mar 28 09:53:02 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 
TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=339 Mar 28 09:53:03 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=432 Mar 28 09:53:04 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=524 Mar 28 09:53:05 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=617 Mar 28 09:53:06 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=711 Mar 28 09:53:07 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=804 Mar 28 09:53:08 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=176.31.236.101 DST=176.31.238.3 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=7430 SEQ=897 Mar 28 09:53:16 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61402 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:19 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61403 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:21 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=64 TOS=0x00 PREC=0x00 TTL=51 ID=55674 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK URGP=0 Mar 28 09:53:25 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=61404 DF PROTO=TCP SPT=57637 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55675 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=116 TOS=0x00 PREC=0x00 TTL=51 ID=55676 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:37 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55677 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:38 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55678 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= 
MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55679 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:39 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5055 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:41 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55680 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:42 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5056 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 Mar 28 09:53:45 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:10:8c:cf:28:39:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=180 TOS=0x00 PREC=0x00 TTL=51 ID=55681 DF PROTO=TCP SPT=57504 DPT=22 WINDOW=501 RES=0x00 ACK PSH URGP=0 Mar 28 09:53:48 localhost kernel: iptables rejected: IN=eth0 OUT= MAC=00:25:90:54:d7:88:c0:62:6b:e3:5c:80:08:00 SRC=194.51.74.245 DST=176.31.238.3 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=5057 DF PROTO=TCP SPT=57638 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0 If I correctly interpreted these results, they say that ICMP rules were correctly interpreted by iptables, but SSH rules were not. This does not make any sense... Does somebody understand where my error comes from? EDIT #3 : After some more tests, I found out that commenting the SYN flood countermeasure removes the problem. I continue researches in this way but, meanwhile, if somebody sees my anti SYN flood rule error...
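
    A note on the SYN rule, offered as a sketch rather than a verified fix: the limit match fires on packets that are still under the configured rate, so -A INPUT -p tcp --syn -m limit ... -j LOGDROP sends the first, legitimate SYNs to LOGDROP and only lets the excess fall through, which matches the rejected DPT=22 SYN packets in the log and the fact that the ICMP rule behaved once it was written as accept-within-limit. The -nvL output above also shows no ESTABLISHED,RELATED accept on INPUT, so packets belonging to already-open SSH sessions can only reach the final catch-all. One common pattern is a dedicated chain (the name SYNFLOOD is illustrative) that returns to INPUT while the rate is respected and drops only the overflow, together with a state rule for established traffic:

      # Accept incoming packets that belong to connections already allowed through
      $IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

      # Rate-check chain: under the limit, RETURN to INPUT so the normal port rules decide;
      # over the limit, log and drop the excess SYNs
      $IPT -N SYNFLOOD
      $IPT -A SYNFLOOD -m limit --limit 100/second --limit-burst 200 -j RETURN
      $IPT -A SYNFLOOD -j LOGDROP

      # Send every incoming SYN on eth0 through the rate check (keep this early in the INPUT chain)
      $IPT -A INPUT -p tcp -i eth0 --syn -j SYNFLOOD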

    Read the article

  • Unable to authenticate Windows XP clients against Snow Leopard Server PDC after 10.6.2 upgrade

    - by Roland
    I have set up Snow Leopard Server 10.6.1 as a PDC and had no problems authenticating Windows XP clients: joining a Windows XP client to the SLS PDC domain and logging in to the domain from a Windows XP client both worked. After the update to Snow Leopard Server 10.6.2 the authentication is broken. opendirectory_smb_pwd_check_ntlmv1 gave -14090 [eDSAuthFailed] By changing the Windows XP "Network security: LAN Manager authentication level" policy to "NTLMv2 responses only", authentication against an SMB share is possible, but trying to join the SLS PDC domain is still not possible. opendirectory_smb_pwd_check_ntlmv2 gave -14090 [eDSAuthFailed] Any ideas? Is anyone else having similar authentication difficulties?

    Read the article

< Previous Page | 967 968 969 970 971 972 973 974 975 976 977 978  | Next Page >