Search Results

Search found 19103 results on 765 pages for 'foreign key'.


  • Sum Values in Multidimensional Array

    - by lemonpole
    Hello all. I'm experimenting with arrays in PHP, and I am setting up a fake environment where a "team's" record is held in arrays:

        $t1 = array(
            "basicInfo" => array("The Sineps", "December 25, 2010", "lemonpole"),
            "overallRecord" => array(0, 0, 0, 0),
            "overallSeasons" => array(
                "season1.cs" => array(0, 0, 0),
                "season2.cs" => array(0, 0, 0)
            ),
            "matches" => array(
                "season1.cs" => array(
                    "week1" => array("12", "3", "1"),
                    "week2" => array("8", "8", "0"),
                    "week3" => array("8", "8", "0")
                ),
                "season2.cs" => array(
                    "week1" => array("9", "2", "5"),
                    "week2" => array("12", "2", "2")
                )
            )
        );

    What I am trying to achieve is to add up the wins, losses, and draws from each season's weeks into that season's totals. So, for example, the sums of all the weeks in $t1["matches"]["season1.cs"] would be added to $t1["overallSeasons"]["season1.cs"]. The result would leave:

        "overallSeasons" => array(
            "season1.cs" => array(28, 19, 1),
            "season2.cs" => array(21, 4, 7)
        ),

    I tried to work this out on my own for the past hour, and all I have gotten is a little more knowledge of for-loops and foreach-loops :o... so I think I now have the basics down, but I am still fairly new to this, so bear with me! I can get the loop to point at the $t1["matches"] key and go through each season, but I can't seem to figure out how to add up the wins, losses, and draws for each individual week. For now, I'm only looking for answers concerning the overall season sums, since I can work onward from there once I figure out how to achieve this. Any help will be much appreciated, but please try to keep it simple for me, or comment the code accordingly. Thanks!
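    A minimal sketch of one way to do the season sums with nested foreach loops (assuming the $t1 structure above; the week scores are strings, so they are cast to integers before adding):

        // Walk every season, every week, and every win/loss/draw column,
        // accumulating each column into the matching overallSeasons slot.
        foreach ($t1["matches"] as $season => $weeks) {
            foreach ($weeks as $week) {
                foreach ($week as $column => $score) {
                    $t1["overallSeasons"][$season][$column] += (int) $score;
                }
            }
        }
        print_r($t1["overallSeasons"]); // season1.cs => 28, 19, 1; season2.cs => 21, 4, 7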

    Read the article

  • Find Consecutive Rows & Calculate Duration

    - by MannyKo
    I have a set of data that tells me whether a couple of systems are available or not, in 5- or 15-minute increments. For now, the time increment shouldn't matter. The data looks like this:

        Status  Time   System_ID
        T       10:00  S01
        T       10:15  S01
        F       10:30  S01
        F       10:45  S01
        F       11:00  S01
        T       11:15  S01
        T       11:30  S01
        F       11:45  S01
        F       12:00  S01
        F       12:15  S01
        T       12:30  S01
        F       10:00  S02
        F       10:15  S02
        F       10:30  S02
        F       10:45  S02
        F       11:00  S02
        T       11:15  S02
        T       11:30  S02

    I want to create a view that tells me when a system is NOT available (i.e. when it is F): from what time, to what time, and the duration, which is To - From. Desired results:

        System_ID  From   To     Duration
        S01        10:30  11:00  00:30
        S01        11:45  12:15  00:30
        S02        10:00  11:00  01:00

    Here is the script for the data:

        DROP SCHEMA IF EXISTS Sys_data CASCADE;
        CREATE SCHEMA Sys_data;
        CREATE TABLE test_data (
            status BOOLEAN,
            dTime TIME,
            sys_ID VARCHAR(10),
            PRIMARY KEY (dTime, sys_ID)
        );
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (TRUE,  '10:00:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (TRUE,  '10:15:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '10:30:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '10:45:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '11:00:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (TRUE,  '11:15:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (TRUE,  '11:30:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '11:45:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '12:00:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '12:15:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (TRUE,  '12:30:00', 'S01');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '10:00:00', 'S02');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '10:15:00', 'S02');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '10:30:00', 'S02');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '10:45:00', 'S02');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (FALSE, '11:00:00', 'S02');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (TRUE,  '11:15:00', 'S02');
        INSERT INTO test_data (status, dTime, sys_ID) VALUES (TRUE,  '11:30:00', 'S02');

    Thank you in advance!
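    A sketch of one standard approach, the ROW_NUMBER() "gaps and islands" trick (this assumes a database with window functions, e.g. PostgreSQL 8.4+, where TIME - TIME yields an interval):

        -- Rows in the same consecutive status run get the same grp value,
        -- so each F-run collapses to a single row when grouped.
        SELECT sys_ID,
               MIN(dTime)              AS "From",
               MAX(dTime)              AS "To",
               MAX(dTime) - MIN(dTime) AS Duration
        FROM (
            SELECT status, dTime, sys_ID,
                   ROW_NUMBER() OVER (PARTITION BY sys_ID ORDER BY dTime)
                 - ROW_NUMBER() OVER (PARTITION BY sys_ID, status ORDER BY dTime) AS grp
            FROM test_data
        ) AS runs
        WHERE NOT status
        GROUP BY sys_ID, grp
        ORDER BY sys_ID, MIN(dTime);

    Wrapping this in CREATE VIEW would give the requested view. Note that, like the desired results above, it reports "To" as the last F sample of each run rather than the time the system came back up.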

    Read the article

  • WP7 Button inside ListBox only "clicks" every other press

    - by Zik
    I have a button defined inside a DataTemplate for my list box:

        <phone:PhoneApplicationPage.Resources>
            <DataTemplate x:Key="ListTemplate">
                <Grid Margin="12,12,24,12">
                    <Grid.RowDefinitions>
                        <RowDefinition Height="Auto" />
                    </Grid.RowDefinitions>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="Auto" />
                        <ColumnDefinition Width="*" />
                    </Grid.ColumnDefinitions>
                    <Button Grid.Column="0" Name="EnableDisableButton"
                            Click="EnableDisableButton_Click" BorderBrush="Transparent">
                        <Grid>
                            <Grid.RowDefinitions>
                                <RowDefinition Height="Auto" />
                                <RowDefinition Height="Auto" />
                            </Grid.RowDefinitions>
                            <Image Grid.Row="0" Source="\Images\img.dark.png" Width="48" Height="48"
                                   Visibility="{StaticResource PhoneDarkThemeVisibility}" />
                            <Image Grid.Row="0" Source="\Images\img.light.png" Width="48" Height="48"
                                   Visibility="{StaticResource PhoneLightThemeVisibility}" />
                            <Rectangle Grid.Row="1" Width="48" Height="8"
                                       Fill="{Binding CurrentColor}" RadiusX="4" RadiusY="4" />
                        </Grid>
                    </Button>
                    <Grid Grid.Column="1">
                        <... more stuff here ...>
                    </Grid>
                </Grid>
            </DataTemplate>
        </phone:PhoneApplicationPage.Resources>

    What I'm seeing is that the first time I press the button, the Click event fires. The second time I press it, it does not fire. The third press fires, the fourth does not, and so on. Originally I had it bound to a command, but that behaved the same way. (I put a Debug.WriteLine() in the event handler, so I know when it fires.) Any ideas? It's really odd that the Click event only fires every other time.
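    One cheap way to narrow this down is to timestamp every invocation, and to check whether alternate taps are being consumed by ListBox selection rather than the button (a guess, not a confirmed diagnosis). ClickMode="Press" is a standard ButtonBase property that raises Click on the down-press instead of the release, so it is worth a quick experiment here:

        // Hypothetical diagnostic handler: timestamps make missed presses
        // obvious in the Output window.
        private void EnableDisableButton_Click(object sender, RoutedEventArgs e)
        {
            System.Diagnostics.Debug.WriteLine(
                string.Format("Click fired at {0:HH:mm:ss.fff}", System.DateTime.Now));
        }

    and in the template: <Button ... ClickMode="Press" ...>.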

    Read the article

  • Indexing on only part of a field in MongoDB

    - by Rob Hoare
    Is there a way to create an index on only part of a field in MongoDB - for example, on the first 10 characters? I couldn't find it documented (or asked about on here). The MySQL equivalent would be:

        CREATE INDEX part_of_name ON customer (name(10));

    Reason: I have a collection with a single field that varies in length from a few characters up to over 1000 characters, averaging 50 characters. As there are a hundred million or so documents, it's going to be hard to fit the full index in memory (testing with 8% of the data, the index is already 400 MB, according to stats). Indexing just the first part of the field would reduce the index size by about 75%. In most cases the search term is quite short; it's not a full-text search.

    A work-around would be to add a second field of 10 (lowercased) characters for each item, index that, and then add logic to filter the results if the search term is over ten characters (that extra field is probably needed anyway for case-insensitive searches, unless anybody has a better way). It seems like an ugly way to do it, though.

    [Added later] I tried adding the second field, containing the first 12 characters of the main field, lowercased. It wasn't a big success. Previously the average object size was 50 bytes, but I forgot that this includes the _id and other overheads, so my main field length (there was only one) averaged nearer to 30 bytes than 50. And the second field's index contains the _id and other overheads too. Net result (for my 8% sample): the index on the main field is 415 MB and the index on the 12-byte field is 330 MB - only a 20% saving in space, which is not worthwhile. I could duplicate the entire field (to work around the case-insensitive search problem), but realistically it looks like I should reconsider whether MongoDB is the right tool for the job (or just buy more memory and use twice as much disk space).

    [Added even later] This is a typical document, with the source field and the short lowercased field:

        { "_id" : ObjectId("505d0e89f56588f20f000041"), "q" : "Continental Airlines", "f" : "continental " }

    Indexes:

        db.test.ensureIndex({q:1});
        db.test.ensureIndex({f:1});

    The "f" index, built on the shorter field, is 80% of the size of the "q" index. I didn't mean to imply that I included the _id in the index - just that the index has to store it somewhere to record what each entry points to, so it's an overhead that probably helps explain why a shorter key makes so little difference. Access to the index will be essentially random: no part of it is more likely to be accessed than any other. The total index size for the full data set will likely be 5 GB, so it's not extreme for that one index. But adding other fields for other search cases (with their associated indexes), plus copies of the data for lower case, does start to add up - which is why I started looking into a more concise index.
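    For reference, a sketch of how the prefix work-around described above can be queried from the mongo shell (the collection and the "q"/"f" field names follow the example document; the 12-character prefix length, and a search term at least that long, are this sketch's assumptions):

        // Equality on the small "f" index finds candidates; the full "q"
        // value is then re-checked to drop prefix collisions.
        var term = "continental airlines";
        var prefix = term.substring(0, 12).toLowerCase();
        db.test.find({ f: prefix }).forEach(function (doc) {
            if (doc.q.toLowerCase() === term) {
                printjson(doc);
            }
        });

    Terms shorter than the stored prefix would instead need an anchored regex or range query on "f".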

    Read the article

  • (Strange) C++ linker error in constructor

    - by Microkernel
    I am trying to write a template class in C++ and am getting this strange linker error. I can't figure out the cause, so please let me know what's wrong with this! Here is the error message I get in Visual C++ 2010:

        1>------ Rebuild All started: Project: FlashEmulatorTemplates, Configuration: Debug Win32 ------
        1>  main.cpp
        1>  emulator.cpp
        1>  Generating Code...
        1>main.obj : error LNK2019: unresolved external symbol "public: __thiscall flash_emulator<char>::flash_emulator<char>(char const *,struct FLASH_PROPERTIES *)" (??0?$flash_emulator@D@@QAE@PBDPAUFLASH_PROPERTIES@@@Z) referenced in function _main
        1>C:\Projects\FlashEmulator_templates\VS\FlashEmulatorTemplates\Debug\FlashEmulatorTemplates.exe : fatal error LNK1120: 1 unresolved externals
        ========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========

    And the error message in g++:

        main.cpp: In function 'int main()':
        main.cpp:8: warning: deprecated conversion from string constant to 'char*'
        /tmp/ccOJ8koe.o: In function `main':
        main.cpp:(.text+0x21): undefined reference to `flash_emulator<char>::flash_emulator(char*, FLASH_PROPERTIES*)'
        collect2: ld returned 1 exit status

    There are two .cpp files and one header file, given below.

    emulator.h:

        #ifndef __EMULATOR_H__
        #define __EMULATOR_H__

        typedef struct
        {
            int property;
        } FLASH_PROPERTIES;

        /* Flash emulation class */
        template<class T>
        class flash_emulator
        {
        private:
            /* Private data */
            int key;

        public:
            /* Constructor - Opens an existing flash by name flashName or creates
               one with the given FLASH_PROPERTIES if it doesn't exist */
            flash_emulator( const char *flashName, FLASH_PROPERTIES *properties );

            /* Constructor - Opens an existing flash by name flashName or creates
               one with the properties given in configFileName */
            flash_emulator<T>( char *flashName, char *configFileName );

            /* Destructor for the emulator */
            ~flash_emulator(){ }
        };

        #endif /* End of __EMULATOR_H__ */

    emulator.cpp:

        #include <Windows.h>
        #include "emulator.h"
        using namespace std;

        template<class T>
        flash_emulator<T>::flash_emulator( const char *flashName, FLASH_PROPERTIES *properties )
        {
            return;
        }

        template<class T>
        flash_emulator<T>::flash_emulator( char *flashName, char *configFileName )
        {
            return;
        }

    main.cpp:

        #include <Windows.h>
        #include "emulator.h"

        int main()
        {
            FLASH_PROPERTIES properties = {0};
            flash_emulator<char> myEmulator("C:\newEMu.flash", &properties);
            return 0;
        }
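    For context, this is the classic symptom of defining template member functions in a .cpp file: when emulator.cpp is compiled, nothing in it uses flash_emulator<char>, so no constructor code is emitted for the linker to find later. A minimal sketch of the usual fixes (either one resolves the LNK2019/undefined-reference error):

        // Option 1: at the bottom of emulator.cpp, explicitly instantiate the
        // template so this translation unit emits all members for T = char.
        template class flash_emulator<char>;

        // Option 2: move the member-function definitions from emulator.cpp
        // into emulator.h, so every user can instantiate what it needs.

    Unrelated to the link error, note that "C:\newEMu.flash" embeds the escape sequence \n; a literal backslash must be written as "C:\\newEMu.flash".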

    Read the article

  • DisplayMemberPath is not working in WPF

    - by WpfBee
    I want to display the CustomerList\CustomerName property items in the ListBox using only the ItemsSource and DisplayMemberPath properties, but it is not working. I do not want to use DataContext or any other binding for this problem. Please help. My code is given below.

    MainWindow.xaml.cs:

        namespace BindingAnItemControlToAList
        {
            /// <summary>
            /// Interaction logic for MainWindow.xaml
            /// </summary>
            public partial class MainWindow : Window
            {
                public MainWindow()
                {
                    InitializeComponent();
                }
            }

            public class Customer
            {
                public string Name { get; set; }
                public string LastName { get; set; }
            }

            public class CustomerList
            {
                public List<Customer> Customers { get; set; }
                public List<string> CustomerName { get; set; }
                public List<string> CustomerLastName { get; set; }

                public CustomerList()
                {
                    Customers = new List<Customer>();
                    CustomerName = new List<string>();
                    CustomerLastName = new List<string>();
                    CustomerName.Add("Name1");
                    CustomerLastName.Add("LastName1");
                    CustomerName.Add("Name2");
                    CustomerLastName.Add("LastName2");
                    Customers.Add(new Customer() { Name = CustomerName[0], LastName = CustomerLastName[0] });
                    Customers.Add(new Customer() { Name = CustomerName[1], LastName = CustomerLastName[1] });
                }
            }
        }

    MainWindow.xaml:

        <Window x:Class="BindingAnItemControlToAList.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:BindingAnItemControlToAList"
                Title="MainWindow" Height="350" Width="525" Loaded="Window_Loaded">
            <Window.Resources>
                <local:CustomerList x:Key="Cust"/>
            </Window.Resources>
            <Grid Name="Grid1">
                <ListBox ItemsSource="{Binding Source={StaticResource Cust}}"
                         DisplayMemberPath="CustomerName"
                         Height="172" HorizontalAlignment="Left" Margin="27,23,0,0"
                         Name="lstStates" VerticalAlignment="Top" Width="245" />
            </Grid>
        </Window>
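    A likely culprit worth noting: the ItemsSource above is bound to the CustomerList object itself, which is not a collection, while DisplayMemberPath is a path resolved against each item, not against the source object. A sketch of two bindings that should work under that reading (same "Cust" resource as above):

        <!-- Bind straight to the List<string>; plain strings need no DisplayMemberPath. -->
        <ListBox ItemsSource="{Binding Source={StaticResource Cust}, Path=CustomerName}" />

        <!-- Or bind to the Customer objects and name the property to display. -->
        <ListBox ItemsSource="{Binding Source={StaticResource Cust}, Path=Customers}"
                 DisplayMemberPath="Name" />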

    Read the article

  • Boost MultiIndex - objects or pointers (and how to use them?)?

    - by Sarah
    I'm programming an agent-based simulation and have decided that Boost's MultiIndex is probably the most efficient container for my agents. I'm not a professional programmer, and my background is very spotty. I have two questions:

    1. Is it better to have the container contain the agents (of class Host) themselves, or is it more efficient for the container to hold Host *? Hosts will sometimes be deleted from memory (that's my plan, anyway... I need to read up on new and delete). Hosts' private variables will get updated occasionally, which I hope to do through the modify function in MultiIndex. There will be no other copies of Hosts in the simulation, i.e., they will not be used in any other containers.

    2. If I use pointers to Hosts, how do I set up the key extraction properly? My code below doesn't compile.

        // main.cpp - ATTEMPTED POINTER VERSION
        ...
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>
        #include <boost/tokenizer.hpp>

        typedef multi_index_container<
            Host *,
            indexed_by<
                // hash by Host::id
                hashed_unique< BOOST_MULTI_INDEX_MEM_FUN(Host,int,Host::getID) > // arg errors here
            > // end indexed_by
        > HostContainer;

        ...

        int main()
        {
            ...
            HostContainer testHosts;
            Host * newHostPtr;
            newHostPtr = new Host( t, DOB, idCtr, 0, currentEvents );
            testHosts.insert( newHostPtr );
            ...
        }

    I can't find a precisely analogous example in the Boost documentation, and my knowledge of C++ syntax is still very weak. The code does appear to work when I replace all the pointer references with the class objects themselves. As best I can read it, the Boost documentation (see the summary table at the bottom) implies I should be able to use member functions with pointer elements.
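    For reference, a sketch of a key-extractor set-up that should compile under the usual Boost.MultiIndex rules (it assumes Host declares int getID() const; since getID is a const member function, the extractor is const_mem_fun rather than mem_fun, the member name is passed to the macro unqualified, and Boost's predefined key extractors are documented to work through pointer elements as well):

        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>

        using namespace boost::multi_index;

        // Elements are Host*; the extractor dereferences each pointer and
        // calls getID() on the pointed-to Host.
        typedef multi_index_container<
            Host*,
            indexed_by<
                hashed_unique<
                    const_mem_fun<Host, int, &Host::getID>
                    // macro form: BOOST_MULTI_INDEX_CONST_MEM_FUN(Host, int, getID)
                >
            >
        > HostContainer;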

    Read the article

  • app-engine-rest-server to raise KeyError("name %s already used" % model_name)

    - by fx
    I'm playing with the appengine-rest-server project to create REST web services for all the existing models. I get a strange error: the first time I request http://localhost:8080/rest/metadata/user in the browser, it gives me this result:

        <xs:schema>
            <xs:element name="user">
                <xs:complexType>
                    <xs:sequence>
                        <xs:element maxOccurs="1" minOccurs="0" name="key" type="xs:normalizedString"/>
                        <xs:element maxOccurs="1" minOccurs="0" name="surname" type="xs:string"/>
                        <xs:element maxOccurs="1" minOccurs="0" name="firstname" type="xs:string"/>
                        <xs:element maxOccurs="1" minOccurs="0" name="ages" type="xs:long"/>
                        <xs:element maxOccurs="1" minOccurs="0" name="sex" type="xs:boolean"/>
                        <xs:element maxOccurs="1" minOccurs="0" name="updatedDate" type="xs:dateTime"/>
                        <xs:element maxOccurs="1" minOccurs="0" name="createdDate" type="xs:dateTime"/>
                    </xs:sequence>
                </xs:complexType>
            </xs:element>
        </xs:schema>

    But refreshing the page gives me this error:

        Traceback (most recent call last):
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 3185, in _HandleRequest
            self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 3128, in _Dispatch
            base_env_dict=env_dict)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 515, in Dispatch
            base_env_dict=base_env_dict)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2387, in Dispatch
            self._module_dict)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2297, in ExecuteCGI
            reset_modules = exec_script(handler_path, cgi_path, hook)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2195, in ExecuteOrImportScript
            script_module.main()
          File "/Users/foo/Documents/AppEngine/helloworld/main.py", line 48, in main
            rest.Dispatcher.add_models({"user": UserModel})
          File "/Users/foo/Documents/AppEngine/helloworld/rest/__init__.py", line 845, in add_models
            cls.add_model(model_name, model_type)
          File "/Users/foo/Documents/AppEngine/helloworld/rest/__init__.py", line 863, in add_model
            raise KeyError("name %s already used" % model_name)
        KeyError: 'name user already used'

    Can someone explain why this happens? If I restart the server and load the page again, I get the XML result, but refreshing causes the error. Is it a bug in the appengine-rest-server application, or is it in my code? My helloworld application is available for download here.
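    For what it's worth, the traceback shape - main() at main.py line 48 calling add_models() - matches the usual dev_appserver behaviour: the module is cached between requests, so module-level code runs once, but main() runs on every request, trying to register "user" a second time. A sketch of the common fix, moving the registration to import time (run_wsgi_app is the standard webapp helper; UserModel and application are assumed to be defined elsewhere in main.py):

        from google.appengine.ext.webapp.util import run_wsgi_app

        import rest

        # Module level: executed once per interpreter instance, so the
        # model name "user" is registered exactly once.
        rest.Dispatcher.add_models({"user": UserModel})

        def main():
            run_wsgi_app(application)  # per-request work only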

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
    In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which Red Hat released a couple of days ago. This article aims to help you evaluate the key points that you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that customers most often ask us about when they are seriously evaluating a middleware infrastructure, especially if you are considering JBoss for some reason. I would suggest that you keep the following question in mind while you are reading these points: "Why should I choose JBoss instead of WebLogic?"

    1) Multi-Datacenter Deployment and Clustering

    - D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6 on the other hand has no direct D/R support included; Red Hat relies on third-party tools with higher prices. When you consider a middleware solution to host your business-critical applications, you should worry about every architectural aspect of the solution. Fail-over support is only one small aspect of a truly reliable solution; if you do not address D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6 you have this extra cost, which will considerably increase the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you start seeing the big picture.

    - WebLogic Server 12c supports advanced LAN clustering, detection of dead servers, and a common alert framework. JBoss EAP 6 on the other hand has limited LAN clustering support with no detection of dead servers. It generates no alerts when servers go down (only if you buy JBoss ON, a separate technology which so far does not support JBoss EAP 6), and manual intervention is required when servers go down. In most cases, admins must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies.

    - WebLogic Server 12c supports the concept of the Node Manager, a separate process that runs on the physical or virtual servers and extends administration of the cluster to WebLogic managed servers, which are often distributed across multiple machines and geographic locations. JBoss EAP 6 on the other hand has no equivalent technology; whole server instances must be managed individually.

    - The WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 on the other hand has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager also enables another very important feature that JBoss EAP lacks: secured administration. When using WebLogic Node Manager, all administration tasks are sent to the managed servers through a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL.

    - WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
    Using OTD, WebLogic instances are load-balanced by highly capable software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered-systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6 on the other hand only offers support for the Apache Web Server, with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

    2) Application and Runtime Diagnostics

    - WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6 on the other hand has no diagnostics capabilities; its only diagnostics tool is the log generated by the application server, and admins are left to analyse thousands of log lines to find out what is going on.

    - WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which gives administrators and developers complete insight into JVM performance, behavior, and possible bottlenecks. WebLogic Server 12c also has an embedded classloader analysis tool, and even a log analyzer tool that enables administrators and developers to view the logs of multiple servers at the same time. JBoss EAP 6 on the other hand relies on third-party tools for anything similar; again, only log searching is offered to find out what's going on.

    - WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers, and database servers, all with deep JVM analysis and diagnostics. JBoss EAP 6 on the other hand, even with JBoss ON ("Operations Network"), which is a separate technology, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs. One of those tools is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere, and IIS transparently.

    - WebLogic Server 12c, with its JRockit support, offers a tool called JRockit Flight Recorder, which can give developers complete visibility into a given period of production monitoring with zero extra overhead. This automatic recording lets you deeply analyse thread latency, memory leaks, thread contention, resource utilization, stack overflow damage, and GC ("Garbage Collection") cycles; observe stop-the-world phenomena in real time; and analyse generational, reference-count, and parallel collections as well as mutator threads. JBoss EAP 6 cannot even dream of supporting something similar, not least because Red Hat does not have its own JVM.

    3) Application Server Administration

    - WebLogic Server 12c offers a complete administration console complemented by scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6 on the other hand has a limited console and provides XML-centric administration. JBoss, after ten years, has only just started developing a rudimentary centralized administration that still leaves a lot of administration tasks aside, so admins and developers must touch scripts and XML configuration files for most advanced, and even simple, administration tasks. This leads applications to error-prone and risky deployments.
    Even using JBoss ON, JBoss EAP is not able to offer decent administration features; admins must be highly skilled in JBoss's internal architecture and its management capabilities.

    - Oracle EM is available to manage multiple domains, databases, application servers, operating systems, and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. As of now, JBoss ON does not support JBoss EAP 6, so even this minimal support is unavailable for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss admins.

    - WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6 on the other hand distinguishes between two operational models, standalone mode and domain mode, which are not consistent with each other; depending on the mode used, different administration skills are required.

    - WebLogic Server 12c has no point-of-failure processes and does not need to define any specialized servers. The domain model in WebLogic has been available for years (at least ten) and is production proven. JBoss EAP 6 on the other hand needs special processes to guarantee JBoss integrity: the PC ("Process Controller") and the HC ("Host Controller"). Unlike WebLogic's, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it does what the WebLogic domain model does.

    - WebLogic Server 12c supports a parallel deployment model, which enables several artifacts to be deployed at the same time. JBoss EAP 6 on the other hand has no similar feature; every deployment is done atomically in the containers. This means that if you deploy a huge EAR (an EAR of 120 MB, for instance) onto JBoss EAP 6, it will take some minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c cuts the deployment time by at least 2X compared to JBoss.

    4) Support and Upgrades

    - WebLogic Server 12c has patch management available. JBoss EAP 6 on the other hand has no patch management; each JBoss EAP instance must be patched manually. To achieve such a feature, you need to buy a separate technology called JBoss ON ("Operations Network") that manages this type of thing. But as of now, JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.

    - WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6 on the other hand has a proven lack of supportability between JBoss AS 4, 5, 6, and 7. Different kernels and messaging engines have been implemented in the JBoss stack over the last five years, revealing an inability to create a well-architected and proven middleware technology.

    - WebLogic Server 12c has patch prescription based on the customer's configuration. JBoss EAP 6 on the other hand has no such capability; people need to create support tickets and have their installations reviewed by Red Hat support staff to get any patch prescription from them.

    - Oracle WebLogic Server, regardless of version, has 8 years of support with new patches, and lifetime availability of existing patches beyond that.
JBoss EAP 6, on the other hand, provides patches for a specific application server version for up to 5 years after the release date; JBoss EAP 4 and earlier versions had only 4 years. A good question that Red Hat will struggle to answer is: "what happens when you find issues after year 5?"

5) RAC ("Real Application Clusters") Support

- WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.); the Oracle JDBC thin driver is also available. JBoss EAP 6, on the other hand, ships only the standard Oracle JDBC thin driver: load balancing with Oracle RAC is not supported, manual intervention is necessary in case of planned or unplanned RAC downtime, and in JBoss EAP 6 the connection state does not reestablish itself automatically after downtime.

- WebLogic Server 12c has a feature called Active GridLink for Oracle RAC, which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds value to critical business applications, leveraging investments in both Oracle database technology and Oracle middleware. JBoss EAP 6 sees no comparable performance gains, even when administrators apply connection-pool tuning.

- WebLogic Server 12c also supports transaction and web-session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are building a reliable solution distributed not only within a LAN cluster but across different data centers. JBoss EAP 6 has no such support.

6) Standards and Technology Support

- WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6, on the other hand, became fully Java EE 6 compatible only in the community version three months later, and production ready only a few days before this article was written in June 2012. Red Hat claims to be a master of innovation and technology proliferation, but compared with Oracle, and even with other proprietary vendors like IBM, it has historically been slow to deliver the newest technologies and standards adherence.

- Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat, on the other hand, does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as their JVM, which means that without Oracle's involvement their support is limited exclusively to the application server layer, and we all know that many problems happen in the JVM layer.

- WebLogic Server 12c natively supports JDK 7, which lets developers get the most out of the Java platform's productivity when writing code. This differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces remarkable productivity features such as the try-with-resources enhancement, catching multiple exception types with one catch block, Strings in switch statements, JVM improvements in JDBC, I/O, networking, security and concurrency, and, most important of all, native support for multiple non-Java languages. More features regarding JDK 7 can be found here. JBoss EAP 6, on the other hand, does not officially support JDK 7; the community documentation comments that "Java SE 7 can be used with JBoss 7", which gives you no guarantee of enterprise support for JDK 7.
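As a quick, self-contained illustration of three of those JDK 7 language features, here is a minimal sketch; the class name, file name and values below are invented for the example, not taken from WebLogic or any product:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class Jdk7Features {

        // try-with-resources: the reader is closed automatically,
        // even if an exception is thrown inside the block.
        static String firstLine(String path) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }

        // Strings in switch statements.
        static int portFor(String protocol) {
            switch (protocol) {
                case "http":  return 80;
                case "https": return 443;
                default:      return -1;
            }
        }

        public static void main(String[] args) {
            // Catching two unrelated exception types with one catch block.
            try {
                System.out.println(firstLine("server.log"));
            } catch (IOException | RuntimeException e) {
                System.err.println("Could not read file: " + e.getMessage());
            }
            System.out.println(portFor("https"));
        }
    }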
- Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use WebLogic's transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all of WebLogic's monitoring and administration capabilities. JBoss EAP 6, on the other hand, has no special integration with Spring. In fact, Red Hat offers a dubious package called "JBoss Web Platform" that in theory supports Spring, but in practice this package offers no special integration; it is just a way for Red Hat customers to get support for both JBoss and Spring technology through the same customer support channel.
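As a minimal sketch of what the Spring side of this integration can look like: Spring itself ships a WebLogicJtaTransactionManager that delegates transaction control to WebLogic's JTA implementation. The configuration class below is hypothetical and assumes Spring 3.1 or later on the classpath of an application deployed to WebLogic; it shows one common wiring, not the only one:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.annotation.EnableTransactionManagement;
    import org.springframework.transaction.jta.WebLogicJtaTransactionManager;

    @Configuration
    @EnableTransactionManagement
    public class TransactionConfig {

        // Delegates Spring-managed transactions to WebLogic's JTA
        // transaction manager instead of a local, resource-bound one,
        // so @Transactional methods enlist in container transactions.
        @Bean
        public PlatformTransactionManager transactionManager() {
            return new WebLogicJtaTransactionManager();
        }
    }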
7) Lightweight Development

- Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modification. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations, offering a true and reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6, on the other hand, has no support for natively reusing an existing (or in-development) application from the JBoss AS community server. JBoss users suffer critical issues at deployment time, including: changing the application's libraries and dependencies, patching the DTD or XSD deployment descriptors, refactoring application layers due to classloading issues and anomalies, and rebuilding persistence, business and web layers over issues with "usage of the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend", etc. If your culture or enterprise IT directive is to develop Java EE applications on community middleware and, at some point in the future, transition to enterprise (vendor-supported) middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution.

- WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you add another 100 MB, resulting in a larger download footprint. This is particularly relevant if you plan to automate the setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson.

- WebLogic Server 12c has complete integration with Maven, allowing developers to set up WebLogic domains with a few commands; tasks like downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6, on the other hand, offers only limited integration with these tools.

- WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving only the web container with the Java EE 6 web profile enabled. JBoss EAP 6 has no such feature; you must manually disable the containers you do not want to use.

- WebLogic Server 12c supports FastSwap, which lets you change classes without redeployment. This is particularly useful if you are developing patches for an application that is already deployed and do not want to redeploy the entire application. It is the same behavior most application servers offer for JSP pages, but with WebLogic Server 12c you have the same feature for Java classes in general. JBoss EAP 6 has no such support; even JBoss EAP 5 still does not support this.

8) JMS and Messaging

- WebLogic Server 12c has had a proven and highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6, on the other hand, has a still-immature technology called HornetQ, introduced in JBoss EAP 5 to replace everything implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing with customers and their investments; when asked why they changed the implementation and caused such a mess, the answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This Red Hat practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way.

- WebLogic Server 12c has troubleshooting and monitoring features included in the WebLogic console and WLDF. JBoss EAP 6, on the other hand, has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

- WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6's JMS storage mechanism relies on an Oracle database or MySQL, which means that if an issue happens in production and Red Hat asserts that a performance problem is caused by the database, they will not support you on the performance issue; they will direct you to call Oracle instead.

- WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), Distributed Queues/Topics and Foreign JMS provider support, which extend the JMS implementation without compromising developer code, keeping things completely transparent. JBoss EAP 6 does not come close to supporting such features.
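That transparency is visible in client code: a distributed destination is looked up and used through the plain JMS API, exactly like a single queue. Below is a minimal sketch using only standard javax.jms calls; the JNDI names are illustrative assumptions rather than real configuration, and the lookup assumes the code runs inside the container or with JNDI environment properties pointing at the server:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class DistributedQueueClient {

        public static void main(String[] args) throws Exception {
            // A distributed queue is looked up exactly like a plain queue,
            // so client code does not change when the destination is clustered.
            Context ctx = new InitialContext();
            ConnectionFactory factory =
                    (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyDistributedQueue");

            Connection connection = factory.createConnection();
            try {
                Session session =
                        connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                TextMessage message = session.createTextMessage("order-4711 accepted");
                producer.send(message);
            } finally {
                connection.close(); // closes the session and producer too
            }
        }
    }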
9) Caching and Grid

- Coherence, Oracle's leading and most mature data grid technology, has been available since the early 2000s and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administration console; even Node Manager supports Coherence. JBoss, on the other hand, discontinued JBoss Cache, its caching implementation, just as it did with its messaging implementation (JBossMQ), which was an issue for long-term customers. JBoss EAP 6 ships Infinispan version 1.0, which is immature and lacks a proven record of successful cases and reliability.

- WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions, without any code changes, from both WebLogic and other application servers like JBoss, Tomcat, WebSphere, GlassFish and even Microsoft IIS. JBoss EAP 6 does not have such support, and even when it does in the future, it will probably support only its own application server.

- Coherence can be used to manage both L1 and L2 cache levels, supporting Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan, on the other hand, support only Hibernate. And most important of all: Infinispan has no successful case of L1 or L2 cache support using Hibernate, which makes us question its viability.

10) Performance

- WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications on this engineered system. Customers benefit from Exalogic optimizations at both the kernel and JVM layers, boosting performance by up to 10X for web, OLTP, JMS and grid applications. JBoss EAP 6, on the other hand, has no investment in engineered systems: customers have no option to deploy on an ultra-fast Java system if their project grows and performance issues are detected.

- WebLogic Server 12c maintains performance gains across each new release: since WebLogic 5.1, the overall performance gain has been close to 4X, roughly a 20% gain release by release. JBoss, on the other hand, does not provide SPECjAppServer or SPECjEnterprise performance benchmarks; its so-called "performance gains" remain hidden in customer environments, which leaves us wondering whether they are real, since we will never get access to those environments.

- WebLogic Server 12c has industry performance benchmarks, with submissions across platforms and configurations leading the SPECj results. Oracle WebLogic leads SPECjAppServer performance in multiple categories, fitting all customer topologies: dual-node, single-node, multi-node and multi-node with RAC. JBoss, again, does not provide any SPECjAppServer performance benchmarks.

- WebLogic Server 12c has a feature called work managers, which allows your application to reach new performance levels based on utilization of critical resources such as CPU. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6 has no comparable feature and probably never will; without work managers, JBoss EAP 6 forces administrators and especially developers to chase performance gains intrusively, by rewriting code and doing performance refactorings.
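For applications that dispatch their own asynchronous work under a work manager's thread policies, WebLogic also exposes work managers to code through the CommonJ API. The following is a hedged sketch, not official sample code: the JNDI name "java:comp/env/wm/default" is assumed to be the conventional default lookup, and a named work manager would instead be declared as a resource reference in the deployment descriptor:

    import javax.naming.InitialContext;

    import commonj.work.Work;
    import commonj.work.WorkManager;

    public class AuditDispatcher {

        // A unit of work the container schedules on a managed thread.
        static class AuditWork implements Work {
            public void run() {
                System.out.println("auditing on a container-managed thread");
            }
            public boolean isDaemon() {
                return false; // short-lived work, not a long-running daemon
            }
            public void release() {
                // called if the container asks the work to stop early
            }
        }

        public static void submit() throws Exception {
            // Look up the work manager and hand it the work; the container,
            // not the application, decides when and on which thread it runs.
            InitialContext ctx = new InitialContext();
            WorkManager wm = (WorkManager) ctx.lookup("java:comp/env/wm/default");
            wm.schedule(new AuditWork());
        }
    }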
11) Professional Services Support

- With WebLogic Server 12c, as with any other technology sold by Oracle, customers can hire OCS ("Oracle Consulting Services") to manage critical scenarios, assist with the deployment of new applications, and provide highly skilled consultancy on architecture and best practices, with staff allocated alongside customer teams. All OCS services are available without restriction, whether the customer has already bought software from Oracle or is just starting an implementation before any purchase. JBoss EAP 6, or Red Hat more specifically, offers professional services only if you buy subscriptions. If you are developing a new business-critical application and need Red Hat's help with a serious issue or an architecture decision, the answer will probably be: "OK... I can help you, but only after you buy subscriptions." Red Hat also does not allow its professional services consultants to manage environments that run community-based software; you will likely be forced to first buy a subscription, download the "enterprise" version and then, optionally, hire their consultants.

- Oracle offers Oracle University to educate your team in Oracle technologies, including of course specialized training for the WebLogic application server. At any time and location, you can hire Oracle to train your team, so you get trustworthy knowledge tailored to your specific needs. Certifications for the products are also available if your technical people wish to differentiate themselves as professionals. Red Hat, on the other hand, has a limited pool of resources to train your team in its technologies. Basically, it sells training and certification for RHEL ("Red Hat Enterprise Linux"), but if you ask for more specialized training in JBoss middleware, it will probably refer you to localized training from some "certified" partner, since it is apparently discontinuing its education center, at least here in Brazil. Red Hat has not been able to reproduce its RHEL education success in its middleware division, since it must first sell you subscriptions before it gives you specialized training. And again, it only offers specialized training based on the enterprise version (EAP in the case of JBoss), which means the courses can be quite outdated. There are reports of developers who took official Red Hat training this year (2012) in which an advanced JBoss course supposedly covered JBossMQ as the messaging subsystem, with even the printed material based on JBossMQ, since the training was created for JBoss EAP 4.3.

12) Encouraging Transparency without Ulterior Motives

- WebLogic Server 12c, like any other Oracle software, can be downloaded at any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: these are the official binaries that will run forever in your data center. Oracle does not encourage the use of "special versions" of its software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6, on the other hand, is not available for download unless you buy a subscription and gain access to the Red Hat enterprise repositories. If you need to test, learn or just start building your application using Red Hat's middleware, you must download it from the community website; you are not allowed to download the enterprise version that, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on insecure, unreliable, unscalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions to get access to the "good" version of the software.

- WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here to access the price list; in the case of WebLogic, check the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that could make the investment feasible, but you are not required to do so unless you are interested in buying the technology or want to discuss discount scenarios. JBoss EAP 6, on the other hand, does not have its cost publicly available on Red Hat's website or anywhere else; at the very least, such information is not easy to find. The only link you are likely to find on the website is "Contact a Sales Representative". That is not a healthy relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open.
In these situations, customers expect software prices to be publicly available, so they have the chance to decide, based on the software's existing features, whether the cost is fair.

Conclusion

Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, with a proven record of success around the globe to back up its maturity. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is based entirely on the Cloud.

    Read the article

  • Weird vps server issue

    - by anon-user0
I have an unmanaged Linux VPS running Ubuntu 11.10 (Oneiric Ocelot) with LNMP installed, plus php-fpm, php-apc, varnish and memcache. I have (or rather had) several live sites on it. Under normal load the server uses ~700 MB of memory, but since last night it is using only ~20 MB, and a lot of the services seem to be down; according to htop I only see nginx working, and mysql starts up and goes down every few minutes in a loop. Here is some information on the server that might help you help me: root@server:~# uname -a Linux server 2.6.18-308.el5.028stab099.3 #1 SMP Wed Mar 7 15:56:00 MSK 2012 i686 i686 i386 GNU/Linux - root@server:~# ifconfig -a lo Link encap:Local Loopback LOOPBACK MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:12515 errors:0 dropped:0 overruns:0 frame:0 TX packets:9541 errors:0 dropped:1 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:7191214 (7.1 MB) TX bytes:536726 (536.7 KB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:176.31.158.78 P-t-P:176.31.158.78 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 - root@server:~# netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 *:http-alt *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp6 0 0 [::]:http-alt [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN Active UNIX domain sockets (only servers) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 9307368 @/com/ubuntu/upstart - htop: http://i.stack.imgur.com/NHKYX.png EDIT: Stressed; my mind was not working. Adding logs: root@server:~# less /var/log/syslog Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart. 
Jun 27 05:39:01 server CRON[9298]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 05:40:01 server CRON[9463]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 05:46:21 server sm-msp-queue[9480]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:19:14, xdelay=00:06:18, mailer=relay, pri=122407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 05:52:39 server sm-msp-queue[9480]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:06:32, xdelay=00:06:18, mailer=relay, pri=842407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:00:01 server CRON[15671]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:06:22 server sm-msp-queue[15690]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:39:15, xdelay=00:06:18, mailer=relay, pri=212407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:09:01 server CRON[18114]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 06:12:40 server sm-msp-queue[15690]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:26:33, xdelay=00:06:18, mailer=relay, pri=932407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:20:02 server CRON[21888]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:26:22 server sm-msp-queue[21907]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:59:15, xdelay=00:06:18, mailer=relay, pri=302407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:27:02 server CRON[24021]: (root) CMD (cd / && run-parts --report /etc/cron.hourly) Jun 27 06:32:40 server sm-msp-queue[21907]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:46:33, xdelay=00:06:18, mailer=relay, pri=1022407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:39:01 server CRON[27941]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 06:40:02 server CRON[28110]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:46:22 server sm-msp-queue[28125]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:19:15, xdelay=00:06:18, mailer=relay, pri=392407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:06:33, xdelay=00:06:18, mailer=relay, pri=1112407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: q5R2e4uo028125: sender notify: Warning: could not send message for past 4 hours Jun 27 06:52:44 server sm-msp-queue[28125]: q5R2e4uo028125: to=root, delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=33690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:00:02 server CRON[1543]: 
(smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 07:06:21 server sm-msp-queue[1560]: q5R2e4uo028125: to=root, delay=00:13:41, xdelay=00:06:18, mailer=relay, pri=123690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:09:01 server CRON[3986]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 07:12:39 server sm-msp-queue[1560]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:45:32, xdelay=00:06:18, mailer=relay, pri=482407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:18:57 server sm-msp-queue[1560]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:32:50, xdelay=00:06:18, mailer=relay, pri=1202407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:20:02 server CRON[7760]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 07:26:22 server sm-msp-queue[7775]: q5R2e4uo028125: to=root, delay=00:33:42, xdelay=00:06:18, mailer=relay, pri=213690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:27:01 server CRON[9887]: (root) CMD (cd / && run-parts --report /etc/cron.hourly) Jun 27 07:32:40 server sm-msp-queue[7775]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=02:05:33, xdelay=00:06:18, mailer=relay, pri=572407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:38:58 server sm-msp-queue[7775]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:52:51, xdelay=00:06:18, mailer=relay, pri=1292407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:39:01 server CRON[13813]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth : root@server:~# df -h Filesystem Size Used Avail Use% Mounted on /dev/simfs 20G 2.3G 18G 12% / - Jun 26 16:22:41 server varnishd[1413]: Child (32425) died signal=3 Jun 26 16:22:41 server varnishd[1413]: child (21687) Started Jun 26 16:22:41 server varnishd[1413]: Child (21687) said Child starts Jun 26 16:22:41 server varnishd[1413]: Child (21687) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Jun 26 16:34:28 server -- MARK -- Jun 26 16:54:29 server -- MARK -- Jun 26 17:14:29 server -- MARK -- Jun 26 17:34:29 server -- MARK -- Jun 26 17:54:29 server -- MARK -- Jun 26 18:14:29 server -- MARK -- Jun 26 18:34:29 server -- MARK -- Jun 26 18:54:29 server -- MARK -- Jun 26 19:14:29 server -- MARK -- Jun 26 19:34:29 server -- MARK -- Jun 26 19:54:29 server -- MARK -- Jun 26 20:14:29 server -- MARK -- Jun 26 20:34:29 server -- MARK -- Jun 26 20:48:12 server exiting on signal 15 Jun 26 20:51:58 server syslogd 1.5.0#6ubuntu1: restart. 
Jun 26 20:52:01 server varnishd[1324]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 26 21:11:58 server -- MARK -- Jun 26 21:31:58 server -- MARK -- Jun 26 21:51:58 server -- MARK -- Jun 26 22:11:58 server -- MARK -- Jun 26 22:31:58 server -- MARK -- Jun 26 22:51:58 server -- MARK -- Jun 26 23:11:58 server -- MARK -- Jun 26 23:31:58 server -- MARK -- Jun 26 23:51:58 server -- MARK -- Jun 27 00:11:58 server -- MARK -- Jun 27 00:23:42 server exiting on signal 15 Jun 27 02:21:10 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 02:21:12 server varnishd[1341]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 27 02:41:10 server -- MARK -- Jun 27 02:46:41 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 03:20:44 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 03:20:46 server varnishd[1238]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 27 03:20:46 server varnishd[1238]: child (1239) Started Jun 27 03:20:46 server varnishd[1238]: Child (1239) said Child starts Jun 27 03:20:46 server varnishd[1238]: Child (1239) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Jun 27 03:32:52 server exiting on signal 15 Jun 27 03:33:16 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 03:33:31 server varnishd[1372]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 27 03:53:16 server -- MARK -- Jun 27 04:13:16 server -- MARK -- Jun 27 04:33:16 server -- MARK -- Jun 27 04:53:16 server -- MARK -- Jun 27 05:13:16 server -- MARK -- Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 05:53:17 server -- MARK -- Jun 27 06:13:17 server -- MARK -- Jun 27 06:33:17 server -- MARK -- Jun 27 06:53:17 server -- MARK -- Jun 27 07:13:17 server -- MARK -- Jun 27 07:33:17 server -- MARK -- Jun 27 07:53:17 server -- MARK -- Jun 27 08:13:17 server -- MARK -- Jun 27 08:33:17 server -- MARK -- Jun 27 08:53:17 server -- MARK -- Jun 27 09:13:17 server -- MARK -- Jun 27 09:33:17 server -- MARK -- Jun 27 09:53:17 server -- MARK -- Jun 27 10:13:17 server -- MARK -- Jun 27 10:33:17 server -- MARK -- Jun 27 10:53:17 server -- MARK -- Jun 27 11:13:17 server -- MARK -- Jun 27 11:33:17 server -- MARK -- Jun 27 11:53:18 server -- MARK -- Jun 27 12:13:18 server -- MARK -- Jun 27 12:33:18 server -- MARK -- Jun 27 12:53:18 server -- MARK -- Jun 27 13:13:18 server -- MARK -- Jun 27 13:33:18 server -- MARK -- Jun 27 13:53:18 server -- MARK -- Jun 27 14:13:18 server -- MARK -- Jun 27 14:33:18 server -- MARK -- Jun 27 14:53:18 server -- MARK -- -- root@server:~# cat /var/log/nginx/error.log 2012/06/27 03:32:54 [alert] 1199#0: worker process 1203 exited on signal 9 2012/06/27 03:32:54 [alert] 1199#0: worker process 1200 exited on signal 9 2012/06/27 03:32:54 [alert] 1199#0: worker process 1201 exited on signal 9 2012/06/27 03:32:54 [alert] 1199#0: worker process 1202 exited on signal 9 root@server:~# cat /var/log/nginx/access.log 31.210.99.87 - - [27/Jun/2012:09:09:08 +0400] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 172 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cms/cmx.jsp HTTP/1.1" 301 184 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /iesvc/iesvc.jsp HTTP/1.1" 301 184 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cmd2/index.jsp HTTP/1.1" 301 184 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:09 +0400] "GET /cmd/index.jsp HTTP/1.1" 301 184 "-" "-" 58.97.147.197 - - [27/Jun/2012:17:17:19 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) 
AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5" 58.97.147.197 - - [27/Jun/2012:17:17:37 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5" 58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-" 58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-" 58.97.147.197 - - [27/Jun/2012:17:17:48 +0400] "-" 400 0 "-" "-" - root@server:~# cat /var/log/daemon.log Jun 26 20:48:10 server xinetd[1177]: Exiting... Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28] Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26] Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25] Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26] Jun 26 20:51:58 server xinetd[1174]: removing chargen Jun 26 20:51:58 server xinetd[1174]: removing chargen Jun 26 20:51:58 server xinetd[1174]: removing daytime Jun 26 20:51:58 server xinetd[1174]: removing daytime Jun 26 20:51:58 server xinetd[1174]: removing discard Jun 26 20:51:58 server xinetd[1174]: removing discard Jun 26 20:51:58 server xinetd[1174]: removing echo Jun 26 20:51:58 server xinetd[1174]: removing echo Jun 26 20:51:58 server xinetd[1174]: removing time Jun 26 20:51:58 server xinetd[1174]: removing time Jun 26 20:51:58 server xinetd[1174]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in. Jun 26 20:51:58 server xinetd[1174]: Started working: 0 available services Jun 26 20:52:01 server vnstatd[1330]: vnStat daemon 1.11 started. Jun 26 20:52:01 server vnstatd[1330]: Monitoring: venet0 Jun 27 00:23:41 server xinetd[1174]: Exiting... Jun 27 02:21:12 server vnstatd[1349]: vnStat daemon 1.11 started. Jun 27 02:21:12 server vnstatd[1349]: Monitoring: venet0 Jun 27 03:20:44 server xinetd[1166]: attribute: disable should not be in default section [file=/etc/xinetd.conf] [line=12] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/chargen [file=/etc/xinetd.conf] [line=15] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26] Jun 27 03:20:44 server xinetd[1166]: removing chargen Jun 27 03:20:44 server xinetd[1166]: removing chargen Jun 27 03:20:44 server xinetd[1166]: removing daytime Jun 27 03:20:44 server xinetd[1166]: removing daytime Jun 27 03:20:44 server xinetd[1166]: removing discard Jun 27 03:20:44 server xinetd[1166]: removing discard Jun 27 03:20:44 server xinetd[1166]: removing echo Jun 27 03:20:44 server xinetd[1166]: removing echo Jun 27 03:20:44 server xinetd[1166]: removing time Jun 27 03:20:44 server xinetd[1166]: removing time Jun 27 03:20:44 server xinetd[1166]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in. 
Jun 27 03:20:44 server xinetd[1166]: Started working: 0 available services Jun 27 03:20:46 server vnstatd[1249]: vnStat daemon 1.11 started. Jun 27 03:20:46 server vnstatd[1249]: Monitoring: venet0 Jun 27 03:32:41 server xinetd[1166]: Exiting... Jun 27 03:33:32 server vnstatd[1380]: vnStat daemon 1.11 started. Jun 27 03:33:32 server vnstatd[1380]: Monitoring: venet0 root@server:~# - Anything else you need let me know

    Read the article

  • Configuring Jenkins for running with BitBucket

    - by Claus
I'm trying to set up Jenkins on my Mac mini in order to pull my iOS project source code from BitBucket and build it automatically. I've already gone through the major well-known problems: generating the ssh keys, uploading them to BitBucket, and making an ssh connection from the console to add the host to the known-hosts list (you can find all my adventures here and here). Now, there are three users on my system: A, B and Shared. When I installed Jenkins it automatically placed itself in Shared, but I generated the ssh keys with user A. So, just to be clear, in A's home directory there is an .ssh directory with the public and private keys. When I try to run my Jenkins job I get this error message: Started by user anonymous Building in workspace /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace - hudson.remoting.LocalChannel@625cb0bb Using strategy: Default Cloning the remote Git repository Cloning repository [email protected]:myuser/myproject.git git --version git version 1.8.0 ERROR: Error cloning remote repo 'origin' : Could not clone [email protected]:myuser/myproject.git hudson.plugins.git.GitException: Could not clone [email protected]:myuser/myproject.git at hudson.plugins.git.GitAPI.clone(GitAPI.java:271) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1036) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134) at hudson.model.AbstractProject.checkout(AbstractProject.java:1325) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581) at hudson.model.Run.execute(Run.java:1516) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:236) Caused by: hudson.plugins.git.GitException: Command "/usr/local/git/bin/git clone --progress -o origin [email protected]:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace" returned status code 128: stdout: Cloning into '/Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace'... stderr: Host key verification failed. fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:885) at hudson.plugins.git.GitAPI.access$000(GitAPI.java:40) at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:267) at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:246) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitAPI.clone(GitAPI.java:246) ... 
14 more Trying next repository ERROR: Could not clone repository FATAL: Could not clone hudson.plugins.git.GitException: Could not clone at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1048) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134) at hudson.model.AbstractProject.checkout(AbstractProject.java:1325) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581) at hudson.model.Run.execute(Run.java:1516) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:236) As you can see, it fails when Hudson tries to run the git command. The odd thing is that if I run /usr/local/git/bin/git clone --progress -o origin [email protected]:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace in my console, it works fine (after fixing a small problem with the folder write permissions using chmod). I found a post reporting a similar error which lists a number of possible options, but I'm not sure how to perform these operations correctly from my console. It looks like Jenkins is trying to run the command as a user which doesn't have permission to retrieve the appropriate keys from my .ssh directory. Not really sure. Maybe this output can help: MacMini:~ myuser$ ps axu | grep "/jenkins" myuser 11660 0.0 4.6 2918124 97096 ?? S 6:59pm 1:05.63 /usr/bin/java -jar /Users/myuser/Library/Caches/org.jenkins-ci.jenkins/jenkins.war jenkins 9896 0.0 9.0 2939824 188552 ?? Ss 4:06pm 17:55.91 /usr/bin/java -jar /Applications/Jenkins/jenkins.war myuser 11930 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep /jenkins MacMini:~ myuser$ ps axu | grep tomcat myuser 11932 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep tomcat MacMini:~ myuser$ I really hope to fix this problem, because I would like to write a very detailed tutorial with all the information I found scattered around the web.

    Read the article

  • postfix with mailman

    - by Thufir
What should happen is that [email protected] should be delivered to that user's inbox on localhost, user@localhost. Thunderbird works fine at reading user@localhost. I'm just using a small portion of postfix-dovecot with Ubuntu mailman. How can I get postfix to recognize the FQDN and deliver such mail to a localhost inbox? root@dur:~# root@dur:~# tail /var/log/mail.err;tail /var/log/mailman/subscribe;postconf -n Aug 27 18:59:16 dur dovecot: lda(root): Error: chdir(/root) failed: Permission denied Aug 27 18:59:16 dur dovecot: lda(root): Error: user root: Initialization failed: Initializing mail storage from mail_location setting failed: stat(/root/Maildir) failed: Permission denied (euid=65534(nobody) egid=65534(nogroup) missing +x perm: /root, dir owned by 0:0 mode=0700) Aug 27 18:59:16 dur dovecot: lda(root): Fatal: Invalid user settings. Refer to server log for more information. Aug 27 20:09:16 dur postfix/trivial-rewrite[15896]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 21:19:17 dur postfix/trivial-rewrite[16569]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 22:27:00 dur postfix[17042]: fatal: usage: postfix [-c config_dir] [-Dv] command Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 22:59:07 dur postfix/postfix-script[17459]: error: unknown command: 'restart' Aug 27 22:59:07 dur postfix/postfix-script[17460]: fatal: usage: postfix start (or stop, reload, abort, flush, check, status, set-permissions, upgrade-configuration) Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 21:39:03 2012 (16734) cola: pending "[email protected]" <[email protected]> 127.0.0.1 Aug 27 21:40:37 2012 (16749) cola: pending "[email protected]" <[email protected]> 127.0.0.1 Aug 27 22:45:31 2012 (17288) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1 Aug 27 22:45:46 2012 (17293) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1 Aug 27 23:02:01 2012 (17588) test3: pending [email protected] 127.0.0.1 Aug 27 23:05:41 2012 (17652) test4: pending [email protected] 127.0.0.1 Aug 27 23:56:20 2012 (17985) test5: pending [email protected] 127.0.0.1 alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix default_transport = smtp home_mailbox = Maildir/ inet_interfaces = loopback-only mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}" mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost myhostname = dur.bounceme.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 readme_directory = no recipient_delimiter = + relay_domains = lists.dur.bounceme.net relay_transport = relay relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth 
smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport root@dur:~# there's definitely a transport problem: root@dur:~# root@dur:~# root@dur:~# grep transport /var/log/mail.log | tail Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: transport_maps lookup failure Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*" Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*" Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: transport_maps lookup failure root@dur:~# trying to add the transport file: EDIT root@dur:~# root@dur:~# touch /etc/postfix/transport root@dur:~# ll /etc/postfix/transport -rw-r--r-- 1 root root 0 Aug 28 00:16 /etc/postfix/transport root@dur:~# root@dur:~# cd /etc/postfix/ root@dur:/etc/postfix# root@dur:/etc/postfix# postmap transport root@dur:/etc/postfix# root@dur:/etc/postfix# cat transport

    Read the article

  • Configuring Cisco 877W router from scratch for DHCP, WiFi, ADSL2+, NAT

    - by David M Williams
Hi all, I apologise if this is a BIG question, but I am quite lost with Cisco IOS. I know what I want to achieve, just not how to do it :( I have a Cisco 877W router with 4 FastEthernet interfaces, 1 ATM interface and 1 802.11 radio. I want to set it up for a small network and am trying to construct the configuration below. I was using Google to try to flesh it out, but I think I need help and guidance from actual experts! If it helps, the output from show ver says: Cisco IOS software, C870 software (C870-ADVSECURITYK9-M), version 12.4(4)T7, release software (fc1) ROM: System bootstrap, version 12.3(8r)YI4, release software Here's what I have so far, which hopefully outlines clearly enough what I am trying to do. The bits in angle brackets are placeholders (e.g. the secret password). ! ! Set router hostname ! hostname Shazam ! ! Set usernames and passwords ! username david privilege 15 secret 0 <PASSWORD> enable secret <SECRETPASSWORD> ! ! Configure SSH and telnet access ! line vty 0 4 privilege level 15 login local transport input telnet ssh ! ! Local logging ! logging buffered 51200 warning ! ! Set date and time for NSW, Australia (GMT +10h) ! ! ! Set router IP address to 192.168.1.1 on FastEthernet0 port ! interface FastEthernet0 ip address 192.168.1.1 255.255.255.0 no shut ip nat inside ! ! Forward any unknown DNS requests to Google ! ip dns server ip name-server 8.8.8.8 ip name-server 8.8.4.4 ! ! Set up DHCP ! DHCP pool covers 192.168.1.100 - .199 ! Set gateway and DNS server to be the router, ie 192.168.1.1 ! service dhcp ip routing ip dhcp excluded-address 192.168.1.1 192.168.1.99 ip dhcp excluded-address 192.168.1.200 192.168.1.255 ip dhcp pool <DHCPPOOLNAME> network 192.168.1.0 255.255.255.0 default-router 192.168.1.1 dns-server 192.168.1.1 lease 7 ! ! DHCP reservations ! ! Assign IP address 192.168.1.105 to MAC address 00-21-5D-2F-58-04 ! ! Configure ADSL2 connection details ! interface atm dsl operating-mode adsl2+ ! ! Set up NAT rules ! ! Forward port 35394 to 192.168.1.105 ! ! Set up WiFi ! ! SSID visible, WPA2 security, Pre-shared key I'm hoping most of this is boilerplate stuff to you guys. I'm keen not just to get a working script but to actually understand it as well. Unfortunately, I'm finding the Cisco reference material online very complex. Thank you!

    Read the article

  • Websphere federated repository for Active Directory

    - by Drakiula
Hi, what I am trying to achieve is to have WebSphere 6.1 use Active Directory for user authentication. WebSphere is running on Windows 2008 R2. What I've done already: successfully set up a federated repository for Windows Active Directory (LDAP); created a realm definition for the federated repository previously defined; set that realm definition as the current realm definition; stopped the WebSphere service. When I attempt to start the WebSphere service again, it crashes with the following stack trace: ------Start of DE processing------ = [9/3/10 2:36:14:133 PDT] , key = com.ibm.websphere.security.EntryNotFoundException com.ibm.ws.security.registry.UserRegistryImpl.createCredential 824 Exception = com.ibm.websphere.security.EntryNotFoundException Source = com.ibm.ws.security.registry.UserRegistryImpl.createCredential probeid = 824 Stack Dump = com.ibm.websphere.wim.exception.EntityNotFoundException: CWWIM4001E The 'null' entity was not found. at com.ibm.ws.wim.registry.util.UniqueIdBridge.getUniqueUserId(UniqueIdBridge.java:233) at com.ibm.ws.wim.registry.WIMUserRegistry$6.run(WIMUserRegistry.java:351) at com.ibm.ws.wim.security.authz.jacc.JACCSecurityManager.runAsSuperUser(JACCSecurityManager.java:500) at com.ibm.ws.wim.security.authz.ProfileSecurityManager.runAsSuperUser(ProfileSecurityManager.java:964) at com.ibm.ws.wim.registry.WIMUserRegistry.getUniqueUserId(WIMUserRegistry.java:340) at com.ibm.ws.security.registry.UserRegistryImpl.createCredential(UserRegistryImpl.java:750) at com.ibm.ws.security.ltpa.LTPAServerObject.authenticate(LTPAServerObject.java:776) at com.ibm.ws.security.server.lm.ltpaLoginModule.login(ltpaLoginModule.java:453) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at javax.security.auth.login.LoginContext.invoke(LoginContext.java:795) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:209) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:709) at java.security.AccessController.doPrivileged(AccessController.java:246) at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:706) at javax.security.auth.login.LoginContext.login(LoginContext.java:603) at com.ibm.ws.security.auth.JaasLoginHelper.jaas_login(JaasLoginHelper.java:376) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3513) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3306) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3086) at com.ibm.ws.security.auth.ContextManagerImpl.getServerSubjectInternal(ContextManagerImpl.java:2180) at com.ibm.ws.security.auth.ContextManagerImpl.getServerSubjectInternal(ContextManagerImpl.java:1972) at com.ibm.ws.security.auth.ContextManagerImpl.initialize(ContextManagerImpl.java:2530) at com.ibm.ws.security.auth.ContextManagerImpl.initialize(ContextManagerImpl.java:2560) at com.ibm.ws.security.core.SecurityContext.enable(SecurityContext.java:83) at com.ibm.ws.security.core.distSecurityComponentImpl.initialize(distSecurityComponentImpl.java:379) at com.ibm.ws.security.core.distSecurityComponentImpl.startSecurity(distSecurityComponentImpl.java:336) at com.ibm.ws.security.core.SecurityComponentImpl.startSecurity(SecurityComponentImpl.java:105) at com.ibm.ws.security.core.ServerSecurityComponentImpl.start(ServerSecurityComponentImpl.java:283) at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:977) at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:673) at com.ibm.ws.runtime.component.ApplicationServerImpl.start(ApplicationServerImpl.java:197) at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:977) at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:673) at com.ibm.ws.runtime.component.ServerImpl.start(ServerImpl.java:526) at com.ibm.ws.runtime.WsServerImpl.bootServerContainer(WsServerImpl.java:192) at com.ibm.ws.runtime.WsServerImpl.start(WsServerImpl.java:140) at com.ibm.ws.runtime.WsServerImpl.main(WsServerImpl.java:461) at com.ibm.ws.runtime.WsServer.main(WsServer.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at com.ibm.wsspi.bootstrap.WSLauncher.launchMain(WSLauncher.java:183) at com.ibm.wsspi.bootstrap.WSLauncher.main(WSLauncher.java:90) at com.ibm.wsspi.bootstrap.WSLauncher.run(WSLauncher.java:72) at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:78) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:92) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:68) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:400) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at org.eclipse.core.launcher.Main.invokeFramework(Main.java:336) at org.eclipse.core.launcher.Main.basicRun(Main.java:280) at org.eclipse.core.launcher.Main.run(Main.java:977) at com.ibm.wsspi.bootstrap.WSPreLauncher.launchEclipse(WSPreLauncher.java:329) at com.ibm.wsspi.bootstrap.WSPreLauncher.main(WSPreLauncher.java:92) Dump of callerThis = Object type = com.ibm.ws.security.registry.UserRegistryImpl com.ibm.ws.security.registry.UserRegistryImpl@68a068a0 Does anybody have a hint on this? I followed the exact steps described in the IBM InfoCenter for setting this up. Thanks in advance for your help.

    Read the article

  • How do I stop and repair a RAID 5 array that has failed and has I/O pending?

    - by Ben Hymers
    The short version: I have a failed RAID 5 array which has a bunch of processes hung waiting on I/O operations on it; how can I recover from this? The long version: Yesterday I noticed Samba access was being very sporadic; accessing the server's shares from Windows would randomly lock up explorer completely after clicking on one or two directories. I assumed it was Windows being a pain and left it. Today the problem is the same, so I did a little digging; the first thing I noticed was that running ps aux | grep smbd gives a lot of lines like this: ben 969 0.0 0.2 96088 4128 ? D 18:21 0:00 smbd -F root 1708 0.0 0.2 93468 4748 ? Ss 18:44 0:00 smbd -F root 1711 0.0 0.0 93468 1364 ? S 18:44 0:00 smbd -F ben 3148 0.0 0.2 96052 4160 ? D Mar07 0:00 smbd -F ... There are a lot of processes stuck in the "D" state. Running ps aux | grep " D" shows up some other processes including my nightly backup script, all of which need to access the volume mounted on my RAID array at some point. After some googling, I found that it might be down to the RAID array failing, so I checked /proc/mdstat, which shows this: ben@jack:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdb1[3](F) sdc1[1] sdd1[2] 2930271872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU] unused devices: <none> And running mdadm --detail /dev/md0 gives this: ben@jack:~$ sudo mdadm --detail /dev/md0 /dev/md0: Version : 00.90 Creation Time : Sat Oct 31 20:53:10 2009 Raid Level : raid5 Array Size : 2930271872 (2794.53 GiB 3000.60 GB) Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB) Raid Devices : 3 Total Devices : 3 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Mar 7 03:06:35 2011 State : active, degraded Active Devices : 2 Working Devices : 2 Failed Devices : 1 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K UUID : f114711a:c770de54:c8276759:b34deaa0 Events : 0.208245 Number Major Minor RaidDevice State 3 8 17 0 faulty spare rebuilding /dev/sdb1 1 8 33 1 active sync /dev/sdc1 2 8 49 2 active sync /dev/sdd1 I believe this says that sdb1 has failed, and so the array is running with two drives out of three 'up'. Some advice I found said to check /var/log/messages for notices of failures, and sure enough there are plenty: ben@jack:~$ grep sdb /var/log/messages ... Mar 7 03:06:35 jack kernel: [4525155.384937] md/raid:md0: read error NOT corrected!! (sector 400644912 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644920 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644928 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389688] md/raid:md0: read error not correctable (sector 400644936 on sdb1). Mar 7 03:06:56 jack kernel: [4525176.231603] sd 0:0:1:0: [sdb] Unhandled sense code Mar 7 03:06:56 jack kernel: [4525176.231605] sd 0:0:1:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Mar 7 03:06:56 jack kernel: [4525176.231608] sd 0:0:1:0: [sdb] Sense Key : Medium Error [current] [descriptor] Mar 7 03:06:56 jack kernel: [4525176.231623] sd 0:0:1:0: [sdb] Add. Sense: Unrecovered read error - auto reallocate failed Mar 7 03:06:56 jack kernel: [4525176.231627] sd 0:0:1:0: [sdb] CDB: Read(10): 28 00 17 e1 5f bf 00 01 00 00 To me it is clear that device sdb has failed, and I need to stop the array, shutdown, replace it, reboot, then repair the array, bring it back up and mount the filesystem. 
I cannot hot-swap a replacement drive in, and I don't want to leave the array running in a degraded state. I believe I am supposed to unmount the filesystem before stopping the array, but that is failing, and that is where I'm stuck now: ben@jack:~$ sudo umount /storage umount: /storage: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) It is indeed busy; there are some 30 or 40 processes waiting on I/O. What should I do? Should I kill all these processes and try again? Is that a wise move when they are 'uninterruptible'? What would happen if I tried to reboot? Please let me know what you think I should do. And please ask if you need any extra information to diagnose the problem or to help!
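    For reference, a sketch of the usual mdadm recovery sequence for this situation follows. The device names are taken from the output above (/dev/md0, failed member /dev/sdb1, healthy member /dev/sdc1) — verify them on your own system before running anything. Note that processes stuck in the D state cannot be killed, so if fuser cannot release the mount, a reboot before the disk swap is normally unavoidable.
        # Identify what is holding the mount (may not help if everything is in D state)
        sudo fuser -vm /storage
        # Mark the member failed (it may already be) and remove it from the array
        sudo mdadm --manage /dev/md0 --fail /dev/sdb1
        sudo mdadm --manage /dev/md0 --remove /dev/sdb1
        # Power off, swap the disk, boot, then copy the partition layout from a healthy member
        sudo sfdisk -d /dev/sdc | sudo sfdisk /dev/sdb
        # Re-add the new partition and watch the rebuild
        sudo mdadm --manage /dev/md0 --add /dev/sdb1
        watch cat /proc/mdstat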

    Read the article

  • iproute2 not functioning ("RTNETLINK answers: Operation not supported")

    - by James Watt
The command and error message: gtwy ~ # ip rule add from 64.251.23.186 table t1 RTNETLINK answers: Operation not supported An older thread on the same problem, which did not help me: http://forums.gentoo.org/viewtopic-t-696982-start-0-postdays-0-postorder-asc-highlight-.html I have searched Google at great length for a solution. It seems that my kernel configuration is missing something. Any help or ideas would be appreciated. My system/kernel is: 2.6.36-gentoo-r5 #3 SMP Thu Jan 13 10:49:06 EST 2011 x86_64 Intel(R) Xeon(R) CPU X3220 @ 2.40GHz GenuineIntel GNU/Linux. I am posting this on SuperUser since this system is used as a workstation and this problem is unrelated to specific tasks that are handled exclusively by servers. iproute2 is installed: gtwy etc # emerge --search iproute2 Searching... [ Results for search key : iproute2 ] [ Applications found : 1 ] * sys-apps/iproute2 Latest version available: 2.6.35-r2 Latest version installed: 2.6.35-r2 Size of files: 378 kB Homepage: http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2 Description: kernel routing and traffic control utilities License: GPL-2 A small snippet of my kernel .config (view entire .config): gtwy linux # cat .config | grep NETLINK CONFIG_NETFILTER_NETLINK=y CONFIG_NETFILTER_NETLINK_QUEUE=y CONFIG_NETFILTER_NETLINK_LOG=y CONFIG_NF_CT_NETLINK=y CONFIG_SCSI_NETLINK=y gtwy linux # cat .config | grep IP_ADVANCED_ROUTER CONFIG_IP_ADVANCED_ROUTER=y gtwy linux # cat .config | grep INGRESS CONFIG_NET_SCH_INGRESS=y gtwy linux # cat .config | grep NET_SCHED CONFIG_NET_SCHED=y emerge --info Portage 2.1.9.25 (default/linux/amd64/10.0, gcc-4.1.2, glibc-2.10.1-r1, 2.6.36-gentoo-r5 x86_64) ================================================================= System uname: Linux-2.6.36-gentoo-r5-x86_64-Intel-R-_Xeon-R-_CPU_X3220_@_2.40GHz-with-gentoo-1.12.13 Timestamp of tree: Thu, 13 Jan 2011 01:15:01 +0000 app-shells/bash: 4.0_p37 dev-java/java-config: 1.3.7-r1, 2.1.10 dev-lang/python: 2.4.6, 2.5.4-r4, 2.6.5-r2, 3.1.2-r3 sys-apps/baselayout: 1.12.13 sys-apps/sandbox: 1.6-r2 sys-devel/autoconf: 2.13, 2.65 sys-devel/automake: 1.9.6-r2::<unknown repository>, 1.10.2, 1.11.1 sys-devel/binutils: 2.20.1-r1 sys-devel/gcc: 4.1.2, 4.3.4, 4.4.3-r2 sys-devel/gcc-config: 1.4.1 sys-devel/libtool: 2.2.6b sys-devel/make: 3.81 virtual/os-headers: 2.6.30-r1 (sys-kernel/linux-headers) ACCEPT_KEYWORDS="amd64" ACCEPT_LICENSE="*" CBUILD="x86_64-pc-linux-gnu" CFLAGS="-march=nocona -O2 -pipe" CHOST="x86_64-pc-linux-gnu" CONFIG_PROTECT="/etc /var/bind" CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/env.d/java/ /etc/fonts/fonts.conf /etc/gconf /etc/php/apache2-php5/ext-active/ /etc/php/cgi-php5/ext-active/ /etc/php/cli-php5/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo" CXXFLAGS="-march=nocona -O2 -pipe" DISTDIR="/usr/portage/distfiles" FEATURES="assume-digests binpkg-logs distlocks fixlafiles fixpackages news parallel-fetch protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch" GENTOO_MIRRORS="http://gentoo.chem.wisc.edu/gentoo" LC_ALL="en_US.UTF-8" LDFLAGS="-Wl,-O1 -Wl,--as-needed" LINGUAS="en" MAKEOPTS="-j5" PKGDIR="/usr/portage/packages" PORTAGE_CONFIGROOT="/" PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --stats --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages" PORTAGE_TMPDIR="/var/tmp" PORTDIR="/usr/portage" PORTDIR_OVERLAY="/usr/local/portage"
SYNC="rsync://rsync.namerica.gentoo.org/gentoo-portage" USE="acl amd64 apache2 berkdb bzip2 cli cracklib crypt ctype cups curl cxx dri fortran gdbm gpm iconv jpeg jpeg2k libwww mmx modules mudflap multilib mysql ncurses nls nptl nptlonly openmp pam pcre perl php png pppd python readline session sockets sse sse2 ssl symlink sysfs tcpd threads unicode vhosts xml xorg xsl zlib" ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci" ALSA_PCM_PLUGINS="adpcm alaw asym copy dmix dshare dsnoop empty extplug file hooks iec958 ioplug ladspa lfloat linear meter mmap_emul mulaw multi null plug rate route share shm softvol" APACHE2_MODULES="actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias" COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog" ELIBC="glibc" GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ubx" INPUT_DEVICES="keyboard mouse evdev" KERNEL="linux" LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text" LINGUAS="en" PHP_TARGETS="php5-3" RUBY_TARGETS="ruby18" USERLAND="GNU" VIDEO_CARDS="fbdev glint intel mach64 mga neomagic nouveau nv r128 radeon savage sis tdfx trident vesa via vmware dummy v4l" XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account" Unset: CPPFLAGS, CTARGET, EMERGE_DEFAULT_OPTS, FFLAGS, INSTALL_MASK, LANG, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS

    Read the article

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
I'm unable to get static routes to persist across a reboot on Mac OS 10.6.5. I've tried all of the methods prescribed in Google search results and in previous posts on this site. I've tried manually creating a launchd daemon, and used RouteSplit's launchd daemon, to no avail. It's clear that the interface is not ready when these methods attempt to apply the route. The workstation in question gets its IP from DHCP and probably hasn't received its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user? Here is some relevant log info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script but this doesn't fly. Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. *** Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50) Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed Dec 15 19:30:47 localhost configd[15]: network configuration changed. Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0 Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2 Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed.
Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local" Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000 Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed. Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20 Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available! Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000 Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log You can see the following line in /var/log/kernel.log that shows the en0 interface coming up: Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
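    One pattern that tends to work around this, sketched below with assumed values (the 192.168.0.0/23 route from the log above, a 60-second cap): instead of a fixed sleep, have the launchd job (RunAtLoad set to true) run a script that polls until DHCP has installed a default route, and only then adds the static route.
        #!/bin/sh
        # Wait until a default route exists, then add the static route.
        # The route values and the 60-second timeout are assumptions - adjust as needed.
        waited=0
        while ! route -n get default >/dev/null 2>&1; do
          sleep 2
          waited=$((waited+2))
          [ "$waited" -ge 60 ] && exit 1   # give up rather than loop forever
        done
        route -n add -net 192.168.0.0/23 192.168.2.2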

    Read the article

  • Setting up Edimax EW-7206APg as Universal Repeater

    - by Ondra Žižka
Hi, I'm having trouble setting up the Edimax EW-7206APg as a Universal Repeater. I've read a few manuals, but they are unclear on certain points. I've managed to get the repeater into a "connected" state. I've set the same WPA passphrase as the router uses, because I haven't seen anywhere else to set it. These are my settings: System Uptime 0day:1h:33m:11s Hardware Version Rev. A Runtime Code Version 1.32 Wireless Configuration Mode Universal Repeater ESSID edimax Channel Number 6 Security WPA-shared key BSSID 00:c0:9f:40:bd:38 Associated Clients 0 Wireless Repeater Interface Configuration ESSID Dusan Security WPA BSSID 00:4f:62:23:8f:7e State Connected LAN Configuration IP Address 192.168.0.10 Subnet Mask 255.255.255.0 Default Gateway 192.168.0.1 MAC Address 00:c0:9f:40:bd:37 This is ipconfig /all: Connection-specific DNS Suffix . : riomail.cz Description . . . . . . . . . . . : Intel(R) PRO/Wireless 2200BG Network Connection Physical Address. . . . . . . . . : 00-0E-35-3D-77-68 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 192.168.0.5 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.0.1 DHCP Server . . . . . . . . . . . : 192.168.0.1 DNS Servers . . . . . . . . . . . : 94.74.192.252 94.74.192.244 I can ping the repeater and the root AP, but not a DNS server or any other IP beyond the root AP. Does anyone have an idea what's wrong? Thanks, Ondra
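    Since the repeater and root AP answer pings but nothing past the gateway does, the fault sits at or beyond 192.168.0.1. A few probes from the Windows client can narrow it down (a sketch; the DNS address is taken from the ipconfig output above):
        ping 192.168.0.1          # root AP/gateway - already known to work
        ping 94.74.192.252        # ISP DNS server by IP - expected to fail here
        tracert 94.74.192.252     # shows the last hop that still answers
        arp -a                    # check the gateway entry maps to the router, not the repeater
    If the trace dies at 192.168.0.1, the usual suspects are the MAC-address translation a universal repeater performs (some routers refuse to forward for translated clients) or a WPA encryption-mode mismatch (e.g. TKIP vs AES) between the repeater link and the root AP.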

    Read the article

  • Rsync: how to mount truecrypt on-the-fly on the receiving side?

    - by deepc
The short version: how can I keep an rsync backup on a truecrypt volume? The hard part is to mount/unmount this volume on the fly when it is needed for rsync. Details This is my current backup configuration (which works fairly well for the most part): backup source is on Win7 64 bit, destination is a remote Linux box (Debian) actual data transfer is done by rsync via ssh (cwRsync with cygwin) rsync daemon is started on demand via ssh On the Linux box the backup is protected by file permissions only. I want to increase security here and put the backup into a truecrypt volume. I can fuse-mount that volume manually in the shell. The question now is how I can make rsync not only open an ssh connection and start the rsync daemon, but also mount the truecrypt volume before (and unmount it after)? My money is on option --rsync-path which can be used to pass a command line to ssh - provided that stdin and stdout still work the same. I guess that command would have to be a shell script. Is this possible, and what would the script look like? For reference, here's a quote of that option: --rsync-path=PROGRAM Use this to specify what program is to be run on the remote machine to start-up rsync. Often used when rsync is not in the default remote-shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate. One tricky example is to set a different default directory on the remote machine for use with the --relative option. For instance: rsync -avR --rsync-path="cd /a/b && rsync" host:c/d /e/ This is the full rsync man page. Truecrypt volume auto-mount Solved! Turns out this option is actually key to auto-mounting the truecrypt volume on the remote side. The following command line does the trick (one line!): rsync $options -e "ssh -p $port -i ../.ssh/id_dsa" --rsync-path="/usr/local/bin/truecrypt -d && /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 --non-interactive -p $password $pathToVolume $remoteMountDir && rsync" $localSourceDir $user:$remoteMountDir Truecrypt volume auto-dismount Still open: how can I unmount the volume when rsync is done? Not sure if the following makes sense to anyone but I'll give it a try... Right now I am unmounting (truecrypt -d), then mounting again, then continuing with rsync. At this time rsync needs to do its thing but I don't know when it's done. Adding ... rsync && truecrypt -d to the command line does not work because then the rsync daemon does not start. This is because rsync starts the daemon with parameter --server on the remote side and that parameter would go to the final truecrypt -d.
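    One way to get the dismount, sketched below: since --rsync-path runs an arbitrary program on the remote side, point it at a wrapper script that mounts the volume, runs the real rsync with whatever server arguments the client sent, and dismounts once rsync exits. The path /usr/local/bin/rsync-tc, the mount point, and the password handling are all assumptions — in particular, passing the password some other way (e.g. a keyfile) may suit your setup better.
        #!/bin/sh
        # /usr/local/bin/rsync-tc (assumed path), used via --rsync-path="/usr/local/bin/rsync-tc"
        /usr/local/bin/truecrypt --fs-options=rw,sync,utf8 --non-interactive \
          -p "$TC_PASSWORD" /path/to/volume /mnt/backup || exit 1
        rsync "$@"                               # runs "rsync --server ..." exactly as the client requested
        status=$?
        /usr/local/bin/truecrypt -d /mnt/backup  # dismount now that rsync has finished
        exit $status
    Because the wrapper, not the remote shell, decides what happens after rsync returns, the && chaining problem described above goes away.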

    Read the article

  • Magento installation problem on Nginx in Windows

    - by Nithin
I am trying to install magento locally using nginx as the web server instead of Apache. I copied the magento folder to the html directory. When I try to access the magento folder, I get a 404 not found error. I am able to access other php files set up in the html folder and have PHP installed. Here is my config file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 8080; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm index.php; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; allow all; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } How do I fix this? This is what I found in the error.log file: 2011/09/06 12:22:35 [error] 5632#0: *1 "/cygdrive/c/nginx/html/magento/index.php/install/index.html" is not found (20: Not a directory), client: 127.0.0.1, server: localhost, request: "GET /magento/index.php/install/ HTTP/1.1", host: "localhost:8080"
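    A plausible explanation, judging from the error log: Magento's installer URL /magento/index.php/install/ carries trailing PATH_INFO, so it never matches location ~ \.php$, and nginx tries to serve it as a static file. One common fix — offered as a sketch against the config above, using the stock nginx directive fastcgi_split_path_info — is to match .php anywhere in the URI and hand the trailing part to PHP as PATH_INFO:
        location ~ ^(.+\.php)(/.*)?$ {
            root html;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME c:/nginx/html$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
        }
    Note the leading slash is already part of $fastcgi_script_name, so the extra slash from the original SCRIPT_FILENAME line is dropped here.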

    Read the article

  • nginx: How can I set proxy_* directives only for matching URIs?

    - by Artem Russakovskii
I've been at this for hours and I can't figure out a clean solution. Basically, I have an nginx proxy setup, which works really well, but I'd like to handle a few URLs more manually. Specifically, there are 2-3 locations for which I'd like to set proxy_ignore_headers to Set-Cookie to force nginx to cache them (nginx doesn't cache responses with Set-Cookie as per http://wiki.nginx.org/HttpProxyModule#proxy_ignore_headers). So for these locations, all I'd like to do is set proxy_ignore_headers Set-Cookie; I've tried everything I could think of outside of setting up and duplicating every config value, but nothing works. I tried: Nesting location directives, hoping the inner location which matches on my files would just set this value and inherit the rest, but that wasn't the case - it seemed to ignore anything set in the outer location, most notably proxy_pass, and I ended up with a 404. Specifying the proxy_cache_valid directive in an if block that matches on $request_uri, but nginx complains that it's not allowed ("proxy_cache_valid" directive is not allowed here). Specifying a variable equal to "Set-Cookie" in an if block, and then trying to set proxy_cache_valid to that variable later, but nginx isn't allowing variables for this case and throws up. It should be so simple - modifying/appending a single directive for some requests - and yet I haven't been able to make nginx do that. What am I missing here? Is there at least a way to wrap common directives in a reusable block and have multiple location blocks refer to it, after adding their own unique bits? Thank you. Just for reference, the main location / block is included below, together with my failed proxy_ignore_headers directive for a specific URI. location / { # Setup var defaults set $no_cache ""; # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie if ($request_method !~ ^(GET|HEAD)$) { set $no_cache "1"; } if ($http_user_agent ~* '(iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry|webos|s8000|bada)') { set $mobile_request '1'; set $no_cache "1"; } # feed crawlers, don't want these to get stuck with a cached version, especially if it caches a 302 back to themselves (infinite loop) if ($http_user_agent ~* '(FeedBurner|FeedValidator|MediafedMetrics)') { set $no_cache "1"; } # Drop no cache cookie if need be # (for some reason, add_header fails if included in prior if-block) if ($no_cache = "1") { add_header Set-Cookie "_mcnc=1; Max-Age=17; Path=/"; add_header X-Microcachable "0"; } # Bypass cache if no-cache cookie is set, these are absolutely critical for Wordpress installations that don't use JS comments if ($http_cookie ~* "(_mcnc|comment_author_|wordpress_(?!test_cookie)|wp-postpass_)") { set $no_cache "1"; } if ($request_uri ~* wpsf-(img|js)\.php) { proxy_ignore_headers Set-Cookie; } # Bypass cache if flag is set proxy_no_cache $no_cache; proxy_cache_bypass $no_cache; # under no circumstances should there ever be a retry of a POST request, or any other request for that matter proxy_next_upstream off; proxy_read_timeout 86400s; # Point nginx to the real app/web server proxy_pass http://localhost; # Set cache zone proxy_cache microcache; # Set cache key to include identifying components proxy_cache_key $scheme$host$request_method$request_uri$mobile_request; # Only cache valid HTTP 200 responses for this long proxy_cache_valid 200 15s; #proxy_cache_min_uses 3; # Serve from cache if currently refreshing proxy_cache_use_stale updating timeout; # Send appropriate headers through proxy_set_header Host $host; # no need for this proxy_set_header X-Real-IP $remote_addr; # no need for this proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Set files larger than 1M to stream rather than cache proxy_max_temp_file_size 1M; access_log /var/log/nginx/androidpolice-microcache.log custom; }
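    For what it's worth, one pattern that usually works here (a sketch, with proxy-common.conf as an assumed filename): proxy_* directives cannot be set conditionally inside if blocks, but a sibling location can include a shared file of directives and then add the one directive that differs, at the cost of repeating the include line in each special location.
        # /etc/nginx/proxy-common.conf (assumed) would carry proxy_pass http://localhost;
        # plus the cache zone, key, timeouts and proxy_set_header lines from the block above.
        location ~* wpsf-(img|js)\.php {
            include /etc/nginx/proxy-common.conf;
            proxy_ignore_headers Set-Cookie;  # the single directive this location adds
        }
    A bare proxy_pass (no URI part) is permitted in regex locations, which is what makes this include-based factoring possible.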

    Read the article

  • Internal drives vs USB-3 with external SSD or eSata with External SSD

    - by normstorm
I have a need to carry VMWare Virtual Machines with me for work. These are very large files (each VM is 20GB or more) and I carry around about 40 to 50 VM's to simulate different software configurations for different client needs. Key: they won't fit on the internal hard drive of my current laptop. I currently execute the VM's from an external 7200RPM 2.5" USB-2 drive. I keep copies of the VM's on other 5400RPM external USB-2 drives. The VM's work from this drive, but they are slow, costing me much time and frustration. It can take upwards of 30 minutes just to make a copy of one of the VM's. They can take upwards of 10-15 minutes to fully launch and then they operate sluggishly. I am buying a new laptop (Core I7, 8GB RAM and other high-end specs). I intend to buy an SSD for the O/S volume (C:). This SSD will not be large enough to hold the VM's. I have always wanted a second internal hard drive to operate the VM's. To have two hard drives, though, I am finding that I will have to go to a 17" laptop which would be bulky/heavy. I am instead considering purchasing a 15" laptop with either an eSATA port or USB-3 ports and then purchasing two external drives. One of the drives might be an external SSD (maybe OCZ brand) for operating the VM's and the other a 7200RPM 1TB hard drive for carrying around the VM's not currently in use. The question is which options would give me the biggest bang for the buck and the weight: 1) 2nd Internal SSD hard drive. This would mean buying a 17" laptop with two drive "bays". The first bay would hold an SSD drive for the C: drive. I would leave the second bay empty from the manufacturer and then purchase/install an aftermarket SSD drive. This second SSD drive would have to be very large (256 GB), which would be expensive. I would still also need another external hard drive for carrying around the VM's not in use. 2) 2nd internal hard drive - 7200 RPM. Again, a 17" laptop would be required, but there are models available with an SSD drive for the C: drive and a second 7200 RPM hard drive. The second drive could probably be large enough to hold the VM's in use as well as those not in use. But would it be fast enough to drive the VM's? 3) USB-3 with External SSD. I could buy a 15" laptop with an SSD drive for the C: drive and a second hard drive for general files. I would operate the VM's from an external USB-3 SSD drive and have a third USB-3 external 7200 RPM drive for holding the VM's not in use. 4) eSATA with External SSD. Ditto, just eSATA instead of USB-3. 5) USB-3 with External 7200 RPM drive. Ditto, but the drive running the VM's would be a USB-3 attached 7200 RPM drive rather than SSD. 6) eSATA with External 7200 RPM drive. Ditto, but the drive running the VM's would be an eSATA attached 7200 RPM drive rather than SSD. Any thoughts on this and any creative solutions?
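    If you can borrow candidate drives before buying, a direct throughput measurement says more than interface spec sheets; a rough sequential test run from inside one of the Linux VMs (or a live USB) might look like this sketch, with the mount path /mnt/testdrive assumed:
        # Sequential write, then read, of a 4 GB file, bypassing the page cache
        dd if=/dev/zero of=/mnt/testdrive/ddtest bs=1M count=4096 oflag=direct
        dd if=/mnt/testdrive/ddtest of=/dev/null bs=1M iflag=direct
        rm /mnt/testdrive/ddtest
    Launch time and copy time for a 20GB VM scale almost directly with the sequential numbers these commands report.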

    Read the article

  • Cisco ASA (Client VPN) to LAN - through second VPN to second LAN

    - by user50855
We have 2 sites that are linked by an IPSEC VPN to remote Cisco ASAs: Site 1 1.5Mb T1 Connection Cisco(1) 2841 Site 2 1.5Mb T1 Connection Cisco 2841 In addition: Site 1 has a 2nd WAN 3Mb bonded T1 Connection Cisco 5510 that connects to the same LAN as Cisco(1) 2841. Basically, Remote Access (VPN) users connecting through the Cisco ASA 5510 need access to a service at the end of Site 2. This is due to the way the service is sold - the Cisco 2841 routers are not under our management, and they are set up to allow connections only from the local LAN VLAN 1 range 10.20.0.0/24. My idea is to have all traffic from Remote Users through the Cisco ASA destined for Site 2 go via the VPN between Site 1 and Site 2, the end result being that all traffic hitting Site 2 has come via Site 1. I'm struggling to find much information on how this is set up. So, firstly, can anyone confirm that what I'm trying to achieve is possible? Secondly, can anyone help me correct the configuration below or point me in the direction of an example of such a configuration? Many thanks. interface Ethernet0/0 nameif outside security-level 0 ip address 7.7.7.19 255.255.255.240 interface Ethernet0/1 nameif inside security-level 100 ip address 10.20.0.249 255.255.255.0 object-group network group-inside-vpnclient description All inside networks accessible to vpn clients network-object 10.20.0.0 255.255.255.0 network-object 10.20.1.0 255.255.255.0 object-group network group-adp-network description ADP IP Address or network accessible to vpn clients network-object 207.207.207.173 255.255.255.255 access-list outside_access_in extended permit icmp any any echo-reply access-list outside_access_in extended permit icmp any any source-quench access-list outside_access_in extended permit icmp any any unreachable access-list outside_access_in extended permit icmp any any time-exceeded access-list outside_access_in extended permit tcp any host 7.7.7.20 eq smtp access-list outside_access_in extended permit tcp any host 7.7.7.20 eq https access-list outside_access_in extended permit tcp any host 7.7.7.20 eq pop3 access-list outside_access_in extended permit tcp any host 7.7.7.20 eq www access-list outside_access_in extended permit tcp any host 7.7.7.21 eq www access-list outside_access_in extended permit tcp any host 7.7.7.21 eq https access-list outside_access_in extended permit tcp any host 7.7.7.21 eq 5721 access-list acl-vpnclient extended permit ip object-group group-inside-vpnclient any access-list acl-vpnclient extended permit ip object-group group-inside-vpnclient object-group group-adp-network access-list acl-vpnclient extended permit ip object-group group-adp-network object-group group-inside-vpnclient access-list PinesFLVPNTunnel_splitTunnelAcl standard permit 10.20.0.0 255.255.255.0 access-list inside_nat0_outbound_1 extended permit ip 10.20.0.0 255.255.255.0 10.20.1.0 255.255.255.0 access-list inside_nat0_outbound_1 extended permit ip 10.20.0.0 255.255.255.0 host 207.207.207.173 access-list inside_nat0_outbound_1 extended permit ip 10.20.1.0 255.255.255.0 host 207.207.207.173 ip local pool VPNPool 10.20.1.100-10.20.1.200 mask 255.255.255.0 route outside 0.0.0.0 0.0.0.0 7.7.7.17 1 route inside 207.207.207.173 255.255.255.255 10.20.0.3 1 crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA crypto dynamic-map outside_dyn_map 20 set
security-association lifetime seconds 288000 crypto dynamic-map outside_dyn_map 20 set security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set reverse-route crypto map outside_map 20 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map interface outside crypto map outside_dyn_map 20 match address acl-vpnclient crypto map outside_dyn_map 20 set security-association lifetime seconds 28800 crypto map outside_dyn_map 20 set security-association lifetime kilobytes 4608000 crypto isakmp identity address crypto isakmp enable outside crypto isakmp policy 20 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 group-policy YeahRightflVPNTunnel internal group-policy YeahRightflVPNTunnel attributes wins-server value 10.20.0.9 dns-server value 10.20.0.9 vpn-tunnel-protocol IPSec password-storage disable pfs disable split-tunnel-policy tunnelspecified split-tunnel-network-list value acl-vpnclient default-domain value YeahRight.com group-policy YeahRightFLVPNTunnel internal group-policy YeahRightFLVPNTunnel attributes wins-server value 10.20.0.9 dns-server value 10.20.0.9 10.20.0.7 vpn-tunnel-protocol IPSec split-tunnel-policy tunnelspecified split-tunnel-network-list value YeahRightFLVPNTunnel_splitTunnelAcl default-domain value yeahright.com tunnel-group YeahRightFLVPN type remote-access tunnel-group YeahRightFLVPN general-attributes address-pool VPNPool tunnel-group YeahRightFLVPNTunnel type remote-access tunnel-group YeahRightFLVPNTunnel general-attributes address-pool VPNPool authentication-server-group WinRadius default-group-policy YeahRightFLVPNTunnel tunnel-group YeahRightFLVPNTunnel ipsec-attributes pre-shared-key *
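    What you describe is generally possible; the catch in the config above is that the client pool (10.20.1.0/24) is not a source the remote 2841s will accept, since they only allow 10.20.0.0/24. A heavily hedged sketch of the missing pieces, in the 8.2-style syntax used above (the trailing "outside" keyword is what enables NAT of traffic arriving on the outside interface — double-check the exact syntax for your ASA version before applying anything):
        ! PAT the VPN pool behind the ASA's inside address so Site 2 only ever
        ! sees a 10.20.0.0/24 source on traffic from remote-access clients
        nat (outside) 1 10.20.1.0 255.255.255.0 outside
        global (inside) 1 interface
        ! and remove the 10.20.1.0/24 -> 207.207.207.173 entry from
        ! inside_nat0_outbound_1, otherwise the NAT exemption takes precedence
        ! and the unreachable pool address leaks through to Site 2
    If this works, the crypto ACLs on the 2841s need no changes at all, because the client traffic arrives at Site 2 looking like ordinary Site 1 LAN traffic.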

    Read the article

< Previous Page | 722 723 724 725 726 727 728 729 730 731 732 733  | Next Page >