Search Results

Search found 26297 results on 1052 pages for 'unit test'.

Page 189/1052 | < Previous Page | 185 186 187 188 189 190 191 192 193 194 195 196  | Next Page >

  • VirtualBox appliance for the Oracle Communications Service Delivery Platform (SDP) Products

    - by chlander
    It's been quite a while since we last blogged. This blog is written by Leif Lourie, a Curriculum Developer for the Oracle Communications Service Delivery Platform (SDP) products. For the last 8 years, Leif has worked as a Curriculum Developer for many of the telecom-oriented products that Oracle offers. He has been working in the telecom industry for about 25 years and has also worked as a software developer, project manager, and solutions architect. He is currently working on courseware for an upcoming release of one of the Service Delivery Platform products. Thanks to Leif not only for this blog, but for making the VM described in the blog available. Cheryl Lander, Oracle Communications InfoDev Senior Director

    Being able to download, install, and test a product within a day is often very important for the people doing the primary evaluation of a software product. If it takes longer, it will require a bigger effort, like a proof-of-concept project with many people involved. Of course, if the product is chosen for a more thorough test, that will probably happen anyway, but then perhaps with a focus on integration instead of product features.

    We have a long tradition of creating complex software that is easy to install and test, and we have often been praised for the ease of getting our products up and running. One key to this has been that there has always been an installer for Windows, as well as for the production environments that are usually Unix and Linux. And the Windows installer has, in most cases, been released for development and testing purposes. Lately, this has changed. Our products are now very seldom released for the Windows platform at all, and even the Linux versions are almost always released for 64-bit systems. This creates problems for many of the people who want to try out our products, since few have access to a 64-bit Linux system of the right kind. Most of us are using a laptop with Windows or Mac OS. Some of us are using Linux or Solaris, but probably a non-certified distribution for the product you want to test.

    My job, among other things, is to develop hands-on practices for our products. For me, it is crucial to have access to environments for installing and using our products. For this reason I have been using virtual machines for many years. I have a ready-made base system, with the necessary tools installed, for all the products I create hands-on practices for. Whenever I start working on hands-on practices for a new product or a new version, I just copy the base system and start working with a clean slate. This saves me a lot of time! Now, I would like to start saving time for my favorite student: you!

    If you are using our products and regularly test new versions, you might benefit from the virtual machine that is now available on Oracle Technology Network: the Virtual Machine for the Oracle Communications Service Delivery Platform (SDP) Products. This virtual machine contains an installation of the 64-bit version of Oracle Enterprise Linux, version 6. It also has Oracle Database Express Edition (XE), Oracle Java and Oracle Enterprise Pack for Eclipse installed. By using Oracle VM VirtualBox you may use Windows, OS X, Linux or Solaris on your laptop. VirtualBox can be installed on top of any of these platforms and gives you the ability to run virtual machines on your laptop.
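    As a minimal sketch of getting the appliance running from the command line (the file and VM names below are placeholders, not the actual download names; the VirtualBox GUI's File > Import Appliance does the same thing):

        # import the downloaded appliance and boot it (names are hypothetical)
        VBoxManage import OracleCommunicationsSDP.ova
        VBoxManage list vms
        VBoxManage startvm "Oracle Communications SDP VM" --type gui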
    After downloading and starting the virtual machine you will also need to download the installation files for the product you want to test; for example Oracle Communications Services Gatekeeper or Oracle Communications Online Mediation Controller. In some cases there are lessons and practices available for the products. The freely available courses are listed in the Oracle Learning Library as a Collection of Oracle Communications Service Delivery Platform Courses. As time goes by, we will make this collection bigger. Also, the goal is to update the virtual machine about one to two times per year, so you will always be able to get a well-maintained virtual machine for the Service Delivery Platform products from us.

    We Value Your Feedback
    If you would like to suggest improvements or report issues on any of the product documentation, curriculum, or training produced by the Oracle Communications Information Development team, you can use these channels: email [email protected], or post a comment on this blog. Thanks for reading!

    Read the article

  • Creating symlink for Postgres

    - by Edwin
    As a developer, I often ssh right into my local database, just to test my application before pushing my code. However, I find that every time I want to access Postgres, I have to type in

        postgres@ubuntu:~$ /usr/local/pgsql/bin/psql test

    whereas on my work machine, all I have to do is type

        postgres@ubuntu:~$ psql --dbname=test --username=user

    I tried creating a symlink, which was successful, but whenever I try connecting through this shortcut, I get the following error message:

        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    How do I get this to work? In case it makes any difference, I'm using a self-compiled version of the 9.1.x series.
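    A hedged aside on the likely cause: a self-compiled PostgreSQL defaults its Unix socket to /tmp, while Ubuntu's packaged psql looks in /var/run/postgresql, which matches the error above. Two possible workarounds, assuming the default source-build layout:

        # put the self-compiled client first on PATH (e.g. in ~/.bashrc)
        export PATH=/usr/local/pgsql/bin:$PATH
        psql --dbname=test --username=user

        # or point the packaged psql at the server's actual socket directory
        psql -h /tmp --dbname=test --username=user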

    Read the article

  • Why isn't cron running my script?

    - by Jingqiang Zhang
    I want to use the Backup and Whenever gems to automatically back up my database. When I connect to the server by SSH as an added user and run backup perform -t my_backup, it works well. But the cron entry:

        0 22 * * * /bin/bash -l -c 'backup perform -t my_backup'

    doesn't run at 22:00. When I use cat /etc/crontab to check cron's config file, it shows:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 * * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6 * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    /bin/bash and /bin/sh are different. What's the reason? What should I do?
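    A sketch of the usual fix: cron runs jobs with a minimal PATH and a non-login shell, so a gem-installed binary like backup is often simply not found. Setting PATH in the crontab and capturing output makes the failure visible (the PATH below is an assumption; check the real location with which backup):

        # in the user's crontab (crontab -e)
        PATH=/usr/local/bin:/usr/bin:/bin
        0 22 * * * /bin/bash -l -c 'backup perform -t my_backup' >> /tmp/my_backup.log 2>&1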

    Read the article

  • yield – Just yet another sexy c# keyword?

    - by George Mamaladze
    The yield operator (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it's not as widely used as it could (or should) be.

    I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it). Let's look at it from the clean code point of view, and see if it really helps us to keep our code understandable, reusable and testable.

    Let's say we want to iterate a tree and do something with its nodes, for instance calculate the sum of their values. The most elegant way would seem to be a recursive method performing a classic depth-first traversal and returning the sum.

        private int CalculateTreeSum(Node top)
        {
            int sumOfChildNodes = 0;
            foreach (Node childNode in top.ChildNodes)
            {
                sumOfChildNodes += CalculateTreeSum(childNode);
            }
            return top.Value + sumOfChildNodes;
        }

    "Do One Thing"
    Nevertheless, it violates one of the most important rules: "Do One Thing". Our method CalculateTreeSum does two things at the same time: it travels inside the tree and it performs a computation, in this case calculating a sum. Doing two things in one method is definitely a bad thing, for several reasons:
    · Understandability: readability / refactoring.
    · Reusability: when overriding, there is no chance to override the computation without copying the iteration code, and vice versa.
    · Testability: you are not able to test the computation without constructing the tree, and you are not able to test the correctness of the tree iteration on its own.

    I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two things in one: computation and iteration? The only chance is to construct a test tree and assert the result of the method call, in our case the sum, against our expectation. And if the test fails, you do not know whether the computation algorithm was wrong or the iteration. To top it all off: according to Murphy's Law, the iteration will have a bug as well as the calculation, and both bugs in combination can cause the sum to be accidentally exactly the one you expect, so the test will PASS.

    That's why it is generally a very good idea not to mix "things" but to isolate them.

    Ok, let's use yield!

        private int CalculateTreeSumClean(Node top)
        {
            IEnumerable<Node> treeNodes = GetTreeNodes(top);
            return CalculateSum(treeNodes);
        }

        private int CalculateSum(IEnumerable<Node> nodes)
        {
            int sumOfNodes = 0;
            foreach (Node node in nodes)
            {
                sumOfNodes += node.Value;
            }
            return sumOfNodes;
        }

        private IEnumerable<Node> GetTreeNodes(Node top)
        {
            yield return top;
            foreach (Node childNode in top.ChildNodes)
            {
                foreach (Node currentNode in GetTreeNodes(childNode))
                {
                    yield return currentNode;
                }
            }
        }

    The two methods do not know anything about each other. One contains the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm, switching from depth-first to breadth-first traversal, or use a stack or the visitor pattern instead of recursion; this will not influence your calculation logic. And vice versa: you can replace the sum with a product, or do whatever you want with the node values; the calculation algorithm is not aware of working on a tree or a graph.

    How about not using yield?
    Now let's ask the question: what if we did not have the yield operator? A brief look at the generated code gives us an answer. The compiler generates a roughly 150-line class to implement the iteration logic.

        [CompilerGenerated]
        private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
        {
            ...
            150 lines of generated code
            ...
        }

    Often we compromise code readability, cleanness, testability, etc. to reduce the number of classes, code lines, keystrokes and mouse clicks. This is human nature: we are lazy. Knowing and using such a sexy construct as yield allows us to be lazy, write very few lines of code and at the same time stay clean and do one thing per method. That's why I generally welcome using stuff like that.

    Note: the recursive depth-first traversal used above is possibly the most compact algorithm, but not the best one from a performance and memory utilization point of view. It was chosen to emphasize the primary aspects of this post.

    Read the article

  • TFS 2008 to TFS 2010 moves and some issues

    - by Enrique Lima
    There have been many things going on this year around TFS. Most of them had to do with migrations (I don't call them upgrades, for the most part, since they involved new hardware and such). Many were implementations using the Conchango SfTS template (now EMC), but there were others that were CMMI or Agile 4.0. Everything would move just fine, no issues. That was until you attempted to run Test Case Management or run the last configuration steps for Lab Management. There is an error that states a project is not ready to run or integrate with Test or Lab Management. And while there was some documentation on how to adjust and update the Agile WITs to make it work, there was still a disconnect in making it work with CMMI. Now there is a great post on how to run the "fix" from end to end. Check the post here: TFS 2010: Enable Test Case Management for upgraded Team Projects

    Read the article

  • Registration-free hosting for ASP.NET web service

    - by Andrew
    I've built a simple ASP.NET web service, tested it locally, and would like to test it when externally hosted. Are there free hosting services available where I can just upload the assembly and service description file and test it straight away, without registering an account, etc.? My service does not do anything malicious and I am OK with running it in a restricted (security sandbox, bandwidth, calls per second, etc.) environment. I have heard about appharbor.com, but it looks like overkill for testing a simple web service.

    Read the article

  • Ubuntu 13.04 on Lenovo G580 > built-in (web)cam not recognized

    - by user159295
    I've installed Ubuntu 13.04 on a Lenovo G580. Now I've noticed that the built-in (web)cam is not being recognized. Does anybody know how to fix this and get the webcam running? Any help is appreciated! Thanks.
    Update: I've downloaded/installed the suggested program and the webcam works within that program. However, when running Camera Test from System Testing, here's what happens: when clicking on "Test", there's NO image showing up, and a few seconds after clicking on "Test" an error message pops up saying "Sorry, Ubuntu 13.04 has experienced an internal error". When going to Skype, video is enabled, and under "select webcam" there is "Lenovo EasyCamera (/dev/video0)" as the only option. But Skype doesn't recognize the camera, as there's no picture. Any ideas on how to get this straightened out?
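    As a diagnostic sketch (the device path is taken from the question; everything else is an assumption), it can help to confirm that the kernel and the V4L2 layer see the camera before blaming individual applications:

        # is the camera on the USB bus, and is the uvcvideo driver loaded?
        lsusb
        lsmod | grep uvcvideo

        # does a plain V4L2 client get frames from the device?
        sudo apt-get install mplayer
        mplayer tv:// -tv driver=v4l2:device=/dev/video0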

    Read the article

  • Powershell – script all objects on all databases to files

    - by Nigel Rivett
    <#
    This simple PowerShell routine scripts out all the user-defined functions, stored procedures,
    tables and views in all the databases on the server that you specify, to the path that you specify.
    SMO must be installed on the machine (it is, if SSMS is installed).
    To run: set the server name and path, open a command window, run powershell,
    copy the below into the window and press Enter; it should run.
    It will create the subfolders for the databases and objects if necessary.
    #>
    $path = "C:\Test\Script\"
    $ServerName = "MyServerNameOrIpAddress"
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
    $serverInstance = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $ServerName
    $IncludeTypes = @("Tables","StoredProcedures","Views","UserDefinedFunctions")
    $ExcludeSchemas = @("sys","Information_Schema")
    $so = New-Object ('Microsoft.SqlServer.Management.Smo.ScriptingOptions')
    $so.IncludeIfNotExists = 0
    $so.SchemaQualify = 1
    $so.AllowSystemObjects = 0
    $so.ScriptDrops = 0   # 0 = do not script DROP statements
    $dbs = $serverInstance.Databases
    foreach ($db in $dbs)
    {
        $dbname = "$db".replace("[","").replace("]","")
        $dbpath = "$path" + "$dbname" + "\"
        if (!(Test-Path $dbpath)) { $null = New-Item -Type Directory -Name "$dbname" -Path "$path" }
        foreach ($Type in $IncludeTypes)
        {
            $objpath = "$dbpath" + "$Type" + "\"
            if (!(Test-Path $objpath)) { $null = New-Item -Type Directory -Name "$Type" -Path "$dbpath" }
            foreach ($objs in $db.$Type)
            {
                if ($ExcludeSchemas -notcontains $objs.Schema)
                {
                    $ObjName = "$objs".replace("[","").replace("]","")
                    $OutFile = "$objpath" + "$ObjName" + ".sql"
                    $objs.Script($so) + "GO" | Out-File $OutFile #-Append
                }
            }
        }
    }

    Read the article

  • How come verification does not include actual testing?

    - by user970696
    Having read a lot about this topic, I still don't get it. Verification should prove that you are building the product right, while validation should prove that you are building the right product. But only static techniques are mentioned as verification methods (code reviews, requirements checks...). How can you say it's implemented correctly if you do not test it? It is said that verification checks, e.g., the code for its correctness. Verification: ensure that the product meets specified requirements. Again, if a function is specified to work in some way, only by testing can I say that it does. Could anyone explain this to me please? EDIT: As the wiki says: Verification: preparing the test cases (based on the analysis of the requirements). Validation: running the test cases.

    Read the article

  • What is the simplest human readable configuration file format?

    - by Juha
    The current configuration file is as follows:

        mainwindow.title = 'test'
        mainwindow.position.x = 100
        mainwindow.position.y = 200
        mainwindow.button.label = 'apply'
        mainwindow.button.size.x = 100
        mainwindow.button.size.y = 30
        logger.datarate = 100
        logger.enable = True
        logger.filename = './test.log'

    This is read with Python into a nested dictionary:

        {
            'mainwindow': {
                'button': {
                    'label': {'value': 'apply'},
                    ...
                },
                ...
            },
            'logger': {
                'datarate': {'value': 100},
                'enable': {'value': True},
                'filename': {'value': './test.log'}
            },
            ...
        }

    Is there a better way of doing this? The idea is to get XML-like behavior while avoiding XML as long as possible. The end user is assumed to be almost totally computer illiterate and basically uses Notepad and copy-paste, so the Python standard "header + variables" format is considered too difficult. The dummy user edits the config file; able programmers handle the dictionaries. A nested dictionary is chosen for easy splitting (the logger does not need, or even cannot have/edit, the mainwindow parameters).

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs, with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13 byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single-shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration
    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690 (2.9 GHz, 2 sockets, 32 threads, 193 GB RAM) and the other 3 were Xeon E5-2680 (2.7 GHz, 2 sockets, 32 threads, 193 GB RAM). There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670 (2.93 GHz, 2 sockets, 24 threads, 96 GB RAM). Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem
    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids.

    The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards). This use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of horizontal scaling.

    Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared-key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results
    In the table below, the "KVS Size" column is the number of records, with the number of shards and the replication factor in parentheses. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process, with the total number of threads in parentheses; hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

        Shards  KVS Size (records)  Clients  Threads     Overall Mixed         Read Latency     Write Latency
                                                          Throughput (ops/sec) av/95%/99% (ms)  av/95%/99% (ms)
        2       400m  (2x3)         4        90 (360)     302,152               0.76/1/3         3.08/8/35
        4       800m  (4x3)         8        90 (720)     558,569               0.79/1/4         3.82/16/45
        8       1600m (8x3)         16       90 (1440)    1,028,868             0.85/2/5         4.29/21/51
        10      2000m (10x3)        20       90 (1800)    1,244,550             0.88/2/6         4.47/23/53

    Read the article

  • How to copy files while keeping directory structure?

    - by Cubic
    This is for a Java project, but the same concept can be applied more generally. Basically, I have a project with all *.java files located in some subdirectory of src. Now I want to grab all directories with the name test in that directory tree and move them into a new directory called tests, e.g.:

        src->com->a1
            -> A.java
            -> B.java
            -> test
                -> test1.java
                -> test2.java

    to

        src->com->a1
            -> A.java
            -> B.java
        tests->com->a1->test
            -> test1.java
            -> test2.java

    How would I best do that?
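    A shell sketch of one way to do it, assuming it is run from the project root and that tests/ may not exist yet:

        # move every directory named "test" under src/ into tests/, preserving the path
        cd src
        find . -type d -name test -prune -print | while IFS= read -r d; do
            dest="../tests/${d#./}"
            mkdir -p "$(dirname "$dest")"
            mv "$d" "$dest"
        done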

    Read the article

  • Why isn't this simple CRON job working?

    - by JakeRow123
    I am on Ubuntu Server and would like to send an email to myself every 10 minutes (as a test). The code for that is in this file: /var/www/cron-test.php. To set up the cron I typed crontab -e and added these lines to the bottom of the file using the nano editor:

        ### email me every 10 min.
        */10 * * * * /var/www/cron-test.php

    But that script is not running every 10 minutes; I only receive the email if I load the PHP script directly in my browser. The cron doesn't seem to be executing at all. What am I doing wrong? Also, this is my first time setting up a cron, so putting the cron script in my www folder is probably not a good idea; should I put it elsewhere? If so, where? Also, is there a cron error log, where all failed crons can be seen?
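    A sketch of the two usual fixes (the PHP binary path is an assumption; check it with which php): run the file through the CLI interpreter explicitly, and capture output so failures become visible:

        # run the script via the PHP CLI every 10 minutes, logging any errors
        */10 * * * * /usr/bin/php /var/www/cron-test.php >> /tmp/cron-test.log 2>&1

    Alternatively, give the script a #!/usr/bin/env php first line and make it executable with chmod +x /var/www/cron-test.php, so the original crontab line can run it directly.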

    Read the article

  • What video graphics card would permit Ubuntu's standard driver to work well?

    - by Rick
    I installed Ubuntu 12.04. All seemed well until I installed the NVIDIA driver. Then crashola! This situation is untenable. It seems I cannot trust NVIDIA, and it seems that I cannot rely on Ubuntu gurus to test third-party drivers. So, do some video card manufacturers not care enough about the Linux market to test their drivers, or are there too many third-party video cards for the Ubuntu folks to test any third-party video drivers? Hence the question: what video graphics card would permit me to use Ubuntu's standard driver, so that I do not have to rely on NVIDIA's or any other third-party driver? Perhaps I could then install THAT card and have things work? The Ubuntu standard driver actually worked prior to installing the NVIDIA driver, but not well, because the display flickered, and flickering gives me a headache.

    Read the article

  • Why does cron.hourly not care about the existence of anacron?

    - by Oliver Salzburg
    This question came up over in Root Access. Why does the default /etc/crontab not check for the existence (and executable flag) of /usr/sbin/anacron for the hourly entry? My /etc/crontab:

        # m h dom mon dow user  command
        17 * * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6 * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    What makes hourly different from the others?

    Read the article

  • Automated tests for differencing algorithm

    - by Matthew Rodatus
    We are designing a differencing algorithm (based on Longest Common Subsequence) that compares a source text and a modified copy to extract the new content (i.e. content that is only in the modified copy). I'm currently compiling a library of test case data. We need to be able to run automated tests that verify the test cases, but we don't want to verify strict accuracy. Given the heuristic nature of our algorithm, we need our test passes/failures to be fuzzy: we want to specify a threshold of overlap between the desired result and the actual result (i.e. the content that is extracted). I have a few sketches in my mind as to how to solve this, but has anyone done this before? Does anyone have guidance or ideas about how to do this effectively?
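    As one hedged sketch of such a fuzzy assertion, a line-based overlap ratio between the expected extraction and the actual extraction can gate pass/fail; the file names and the 90% threshold here are assumptions:

        # pass if at least 90% of the expected lines appear verbatim in the actual output
        total=$(wc -l < expected.txt)
        matched=$(grep -Fxf expected.txt actual.txt | sort -u | wc -l)
        awk -v m="$matched" -v t="$total" 'BEGIN { exit !(t > 0 && m / t >= 0.9) }' \
            && echo "PASS" || echo "FAIL"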

    Read the article

  • Testing web applications written in Java

    - by Vinoth Kumar
    How do you test web applications (both the server-side and client-side code)? The testing method has to work irrespective of the framework used (Struts, Spring Web MVC, etc.). I am using Java for the server-side code, and JavaScript and HTML for the client-side code. This is a sample test case of what I am talking about:
    1. When you click on a link, a pop-up opens.
    2. Change some value in the pop-up (say a drop-down value) and it gets saved in the DB.
    3. Click the pop-up again, and you get the changed values.
    Can we simulate this kind of thing using unit test cases? Is JUnit enough for this?

    Read the article

  • What would one call this architecture?

    - by Chris
    I have developed a distributed test automation system which consists of two different entities. One entity is responsible for triggering test runs and monitoring/displaying their progress. The other is responsible for carrying out tests on its host. Both of these entities retrieve data from a central DB. Now, my first thought is that this is clearly a client-server architecture. After all, you have exactly one organizing entity and many entities that communicate with said entity. However, while the supposed clients communicate with the server via RPC, they are not actually requesting services or information; rather, they are simply reporting back test progress, and in fact, once a test run has been triggered they can complete their tasks without a connection to the server. The request for a service is actually made by the supposed server, which triggers the clients to carry out tests. So would this still be considered a client-server architecture, or is this something different?

    Read the article

  • Can rsyslog transfer files present in a directory?

    - by Tarun
    I have configured rsyslog and it's working fine, as intended. These are the conf files:

    Server side:

        $template OTHERS,"/rsyslog/test/log/%fromhost-ip%/others-log.log"
        $template APACHEACCESS,"/rsyslog/test/log/%fromhost-ip%/apache-access.log"
        $template APACHEERROR,"/rsyslog/test/log/%fromhost-ip%/apache-error.log"
        if $programname == 'apache-access' then ?APACHEACCESS
        & ~
        if $programname == 'apache-error' then ?APACHEERROR
        & ~
        *.* ?OTHERS

    Client side:

        # Apache default access file:
        $ModLoad imfile
        $InputFileName /var/log/apache2/access.log
        $InputFileTag apache-access:
        $InputFileStateFile stat-apache-access
        $InputFileSeverity info
        $InputRunFileMonitor

        # Apache default error file:
        $ModLoad imfile
        $InputFileName /var/log/apache2/error.log
        $InputFileTag apache-error:
        $InputFileStateFile stat-apache-error
        $InputFileSeverity error
        $InputRunFileMonitor

        if $programname == 'apache-access' then @10.134.125.179:514
        & ~
        if $programname == 'apache-error' then @10.134.125.179:514
        & ~
        *.* @10.134.125.179:514

    Now, in rsyslog, instead of defining separate files, can I give the complete directory, so that the client automatically sends all the log files present in /var/log/apache2 and, on the syslog server side, these files automatically get stored under different filenames?
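    For what it's worth, newer rsyslog releases (v8.x and later, to my knowledge) accept wildcards in the imfile file name, which would replace the per-file stanzas; treat the exact syntax below as an assumption to verify against your rsyslog version:

        # client side, rsyslog v8+ RainerScript style (verify against your version)
        module(load="imfile")
        input(type="imfile"
              File="/var/log/apache2/*.log"
              Tag="apache:"
              Severity="info")
        *.* @10.134.125.179:514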

    Read the article

  • Editing a command-line argument to create a new variable

    - by user1883614
    I have a bash script called test.sh that uses a command-line argument:

        lynx -dump $1 > $name".txt"

    But I need name to be created from the argument, by specific parts of the argument. An example:

        http://www.pcmag.com/article2/0,2817,2412941,00.asp
        http://www.pcmag.com/article2/0,2817,2412919,00.asp

    These are two separate articles, but the difference can only be seen in those 12 characters. How do I create a variable from a URL for those characters, so that when I run test.sh in a terminal:

        ./test.sh http://www.pcmag.com/article2/0,2817,2412941,00.asp

    there is a text file saved as 0,2817,2412941,00?
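    A sketch that derives the name from the URL's last path segment (this assumes the distinguishing characters are always in the final component, as in the examples above):

        #!/bin/bash
        # test.sh: dump a page to a file named after the URL's last path segment
        name=$(basename "$1" .asp)   # .../0,2817,2412941,00.asp -> 0,2817,2412941,00
        lynx -dump "$1" > "$name.txt"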

    Read the article

  • Deactivate one graphics device in an SLI system

    - by NobbZ
    As it is now, I have a gaming laptop with two GeForce 9800M GT cards with SLI activated, on Ubuntu 12.04. When I use any graphics driver that is aware of acceleration features, the system crashes after a short period of time. After I removed and purged Jockey, my system runs stably but slowly, and I can use it for now (I have to choose recovery at startup and then resume boot; booting up normally results in a color-flickering screen). But since the system runs slowly, I would like to try to work with only one graphics device: first test device A; if that doesn't work, test device B. A testing tool for Windows already told me that one of the cards doesn't work properly, but it doesn't say which one (the same crashes occur in Windows when programs or the desktop use acceleration). Therefore I wanted to ask: is there a way to reinstall any NVIDIA (or third-party) driver, deactivate one of the two devices, and test for stability?
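    One hedged approach (the PCI address below is made up; read the real one from lspci): identify each GPU's address, then pin the X server to a single card with a BusID entry in /etc/X11/xorg.conf, testing one card at a time.

        # list both GPUs and note their PCI addresses (e.g. 01:00.0)
        lspci | grep -i vga

    Then, in /etc/X11/xorg.conf, a Device section along these lines (the address is hypothetical; convert it from the lspci output):

        Section "Device"
            Identifier "GPU0"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection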

    Read the article

  • Multiple processes aware of each other

    - by Abhan
    Hope you can help me with this; I've searched a lot and I got really confused about what to do here. I'm building a program in C and I need to run multiple instances of it, as many as I need. So it's going to be like below:

        process 1 handles all rows in DB table test where process_flag=1
        process 2 handles all rows in DB table test where process_flag=2
        process 3 handles all rows in DB table test where process_flag=3
        and so on

    How can I make the processes aware of each other, so that if process 3 goes down, processes 1 & 2 start working on process_flag=3?

    Read the article

  • Added 2nd HDD, created new mount point for /mnt/datanew, get "you are not the owner"

    - by user212383
    I am completely new to Linux and have been asked to extend a VM running Ubuntu. I thought I would test this first, so I have just installed it in a test VM. I added a second hard drive and used GParted to format it with ext4, so I now have a drive called /dev/sdb1. I then created a new directory called /mnt/datanew and mounted the drive using the command below:

        sudo mount -t ext4 /dev/sdb1 /mnt/datanew

    I thought I was doing well, until I went into Home Folder > File System > mnt > datanew and noticed I couldn't create a new folder. I checked the properties and it said I don't have permission, as it's all owned by root. How do I change this? I need to create some data and then test extending the partition, as I want to see if it has any impact.
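    A minimal sketch of the usual fix (substitute your own username for $USER if needed): a freshly created ext4 filesystem is owned by root, so hand the mounted tree to your user:

        # give your user ownership of the mounted filesystem, then verify
        sudo chown -R $USER:$USER /mnt/datanew
        ls -ld /mnt/datanew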

    Read the article

  • Installing multi-boot Linux system

    - by user94924
    I have a dual-boot (XP/Linux) Compaq Presario S4500UK with a couple of spare partitions on the boot drive, which I want to use for testing different Linux distros/configs before installing them on "live" machines. I tried this with Oneiric, which worked fine until I came to uninstall the test system. Fortunately Oneiric uses GRUB 2, so redirecting the bootloader to work from the original Linux partition was a piece of cake, if a bit annoying. I now need to do some tests on Hardy and DSL (Damn Small Linux), neither of which uses GRUB 2. Questions: Is it possible to install a test operating system without adding/replacing the bootloader (i.e. so I can still use GRUB 2 from the original partition, with its nice interface and recovery facilities)? Is there another way of uninstalling/decommissioning a test OS which doesn't trash the bootloader? (The only way I know is deleting/reformatting the partition, which takes with it the bootloader that lives in that partition.) Any help would be much appreciated. Tks jg1
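    As a sketch of one common workaround (the commands assume the main install is the GRUB 2 one and that GRUB 2 lives on the MBR of /dev/sda; confirm that first): let each test installer write its bootloader to its own partition rather than the MBR, or skip the bootloader step entirely, and then re-register the new system from the main install:

        # from the main (GRUB 2) install, after adding a test distro:
        sudo update-grub        # os-prober should pick up the new partition's OS

        # if a test installer overwrote the MBR anyway, put GRUB 2 back:
        sudo grub-install /dev/sda
        sudo update-grub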

    Read the article
