Search Results

Search found 35507 results on 1421 pages for 'performance test'.

Page 323/1421 | < Previous Page | 319 320 321 322 323 324 325 326 327 328 329 330  | Next Page >

  • How to make cron run on OSX 10.6.2?

    - by Radek
    Note: this question is not about how to edit the crontab but how to make cron work. I edited my crontab using env EDITOR=joe crontab -e and entered 1 * * * * echo 'test' > /Users/radek/Backup/rationalvmware/test.txt, and it does nothing, although the cron entry is set up correctly (checked via Cronnix and by viewing the crontab in /var/cron/tabs; editing the crontab using Cronnix gives me the same results). If I run echo 'test' > /Users/radek/Backup/rationalvmware/test.txt manually it creates the file as expected, so I assume the command I give to cron is the correct one. Is there anything special I have to do to make cron work on OSX? How can I check whether cron is running? What's the equivalent of /var/log/messages on OSX? On SuSE I can see in the messages log that cron works.
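    A couple of quick checks, offered as a sketch rather than a confirmed fix: on OS X 10.6 cron is started by launchd and logs through syslog, so its activity usually ends up in /var/log/system.log (the closest equivalent of /var/log/messages). The commands below assume the default launchd and syslog configuration.

        # is the cron job loaded by launchd?
        sudo launchctl list | grep cron

        # did cron try to run the entry? system.log is the rough equivalent of /var/log/messages
        grep -i cron /var/log/system.log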

    Read the article

  • Adding a KPI to an SQL Server Analysis Services Cube

    Key Performance Indicators, which vary according to the application, are widely used as a measure of the performance of parts of an organisation. Analysis Services makes it easy to add this KPI data to your cube. All you have to do is follow Rob Sheldon's simple instructions.

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is. What is NewSQL? NewSQL stands for the new generation of scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database; it is about vendors who support emerging data products with relational database properties (like ACID and transactions) along with high performance. Products from NewSQL vendors usually keep data in memory for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes: “NewSQL” is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as ‘ScalableSQL’ to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term ‘NewSQL’ in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL. Categories of NewSQL There are three major categories of NewSQL: New Architecture – In this framework each node owns a subset of the data and queries are split into smaller queries that are sent to the nodes to process the data. E.g. NuoDB, Clustrix, VoltDB. MySQL Engines – Highly optimized storage engines behind the familiar MySQL interface are examples of this category. E.g. InnoDB, Akiban. Transparent Sharding – These systems automatically split the database across multiple nodes. E.g. ScaleArc. Summary In simple words – NewSQL describes databases that follow relational database principles and provide scalability like NoSQL. Tomorrow In tomorrow’s blog post we will discuss the role of Cloud Computing in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • BI&EPM in Focus April 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News Oracle OpenWorld call for papers now open, now through April 9 (link) Oracle Announces Availability of Oracle Exalytics In-Memory Machine (link) Oracle EPM and BI Support Newsletter Current Edition - Volume 3 : March 2012 (link) Customers Asiana Airlines Improves Passenger Management with Near-Real-Time Reservation and Ticketing Information  Centraal Boekhuis Delivers Faster with Oracle BI 11g Essatto Software Speeds Data Aggregation Tenfold; Integrates BI, Performance Management, and Data Warehousing for Midsize Businesses Grupo WTorre Supports Management's Decision-Making with OBIEE, Ensuring Uniform, Reliable, and Consistent Data Indian Overseas Bank Cuts Planning Schedule by 45 Worker Days per Year, Assesses Market Risk Instantly with Business Intelligence System Kentucky Community and Technical College System Enables Data-Driven Decision-Making Using Integrated System with Management Dashboards National Australia Bank Achieves 200% ROI, Improves Data Quality and Reporting Integrity with Oracle Hyperion DRM R.L. Polk & Co. Enhances Business Intelligence Capabilities, Optimizes System Performance with Extreme Analytics Machine Test ResCare, Inc. Transforms Reporting to Improve Healthcare Service Performance with Oracle Business Analytics  Rochester City School District Uses OBIEE to Track Student Achievement, Identify Areas for Improvement, Accelerate Reporting  Société Générale Standardizes, Accelerates, and Improves Budget Planning Accuracy across Global Enterprise The State Accounting Office of Georgia Integrates Financial Information, Shortens Financial Closings and Streamlines Reporting across 175 Organizations   Events 4-day Oracle Real-Time Decisions Hands-on Technical Workshop for Partners (PTS, Free) May 14-17, 2012: Colombes, Paris, France Nordic events : “Latest Release of Oracle Hyperion EPM and BI Suites Helps Organizations Plan through Uncertainty, Improve Decision-Making and Meet Regulatory Requirements” (April 17, Sweden | April 18, Norway | April 19, Denmark | April 24, Finland) Webcast Replay from Balaji Yelamanchili and Paul Rodwick: “Analytics Without Limits - The Latest on Oracle Exalytics In-Memory Machine and Oracle Business Intelligence”  (link)  Wednesday, April 04, 2012: Business Analytics launch webcast: Invite your customers to register (link) Big Data Online Forum now available on Demand (link)  Enterprise Performance Management Webcast Replay: Accurate Forecasting within the Business Planning Cycle (link) Oracle Hyperion Profitability and Cost Management (HPCM) Master Support Note (link) Business  Intelligence Whitepaper: Driving Innovation Through Analytics (link) Gartner: CIOs Identify BI as the No. 1 Technology Priority for 2012 (link) Webcast Replay: Exalytics in Action: Airlines, US Census and Federal Spending Demo Applications  (link) NEWLY RELEASED Walk-in Video for Exalytics - Use This to Start Customer/Partner Meetings! (link) IDC Insight Paper: “Oracle's All-Out Assault on the Big Data Market: Offering Hadoop, R, Cubes, and Scalable IMDB in Familiar Packages” (link) System Requirements and Supported Platforms for Oracle Business Intelligence Suite Enterprise Edition 11gR1 Certification Matrix now published to include OBIEE 11.1.1.6.0 (link) Maintenance Release Guide (List of Bugs Fixed) for Oracle Business Intelligence Enterprise Edition (OBIEE) 11.1.1.6.0  (link) OBIEE 11.1.1.6: Is OBIEE 11.1.1.6 Certified With OBI Apps 7.9.6.3?  
(link) Information Center: Troubleshooting Oracle Business Intelligence Applications (support login req'd)  (link)      

    Read the article

  • Final Series Webcast: Exalogic Enables Super Fast Oracle Apps - Nov 29, 18:00 GMT

    - by swalker
    Invite customers to learn how Exalogic provides high-speed performance for Oracle JD Edwards, E-Business Suite, and PeopleSoft Enterprise applications. The third and final webcast in this series "Oracle Exalogic for Oracle PeopleSoft Applications," November 29, 2011 at 10AM PT, will show customers how Exalogic can simplify their PeopleSoft architecture and help them achieve new levels of performance, scalability, and reliability. Share this evite.

    Read the article

  • Active FTP client blocked on Windows XP

    - by Ian Hannah
    Summary We have an FTP server (running in active mode). We have an FTP client which connects to the server, carries out a task and then closes the connection. The FTP client can perform this operation on multiple threads. Problem We have a situation where customers are experiencing occasional failures to carry out operations on an FTP connection. The connection to the server has been made, but when the server attempts to return data on the data port it fails. Observations We have a simple test FTP client which runs two separate threads. Each thread performs a recursive listing of files from a root directory. With the firewall running on the client machine the hang happens within a few minutes. If the firewall is turned off on the client machine, the test application seems to run correctly. This does point to a potential firewall issue. However, with the firewall on we can list files on our company FTP server without any issues. If the simple test FTP client runs a single thread then we do not experience any problems whether or not the firewall is turned on. We have another simple test FTP client which ran 4 threads (each opening a new FTP connection, doing a directory listing and closing the FTP connection as fast as possible) overnight with the firewall turned off without failing. With the firewall turned on it fails in a short space of time. The confusing thing is that if the test FTP client and the FTP server are run on the same machine the failure occurs even though the firewall is turned off, which means the problem may not be firewall related. Any help with what this could be would be much appreciated. Thanks Ian

    Read the article

  • PHP include() through HTTP makes Apache time out

    - by Adam Interact
    I have a problem with ExpressionEngine2 after moving from an old server to WHM/cPanel running on CentOS 6.4. Simple test code to reproduce the issue: <?php $protocol = strpos(strtolower($_SERVER['SERVER_PROTOCOL']),'https') === FALSE ? 'http' : 'https'; $host = $_SERVER['HTTP_HOST']; include($protocol . '://' . $host . '/header.html'); ?> <p> Main text...</p> <?php include($protocol . '://' . $host . '/footer.html'); ?> Where header.html looks like <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <body> and footer.html looks like: </body> </html> This makes the request time out with: Warning: include(http://www.domain.com/header.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 5 Warning: include() [function.include]: Failed opening 'http://www.domain.com/header.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 5 Main text... Warning: include(http://www.domain.com/footer.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 12 Warning: include() [function.include]: Failed opening 'http://www.domain.com/footer.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 12 Any clue what could be wrong with the Apache or PHP configuration? Thanks
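    A sketch of a common workaround, assuming header.html and footer.html live in the site's document root: including the fragments from the filesystem sidesteps the loopback HTTP request (and the firewall and allow_url_include questions it raises) entirely. The paths below are illustrative.

        <?php
        // Include the fragments straight from disk instead of over HTTP.
        $root = rtrim($_SERVER['DOCUMENT_ROOT'], '/');
        include $root . '/header.html';
        ?>
        <p> Main text...</p>
        <?php include $root . '/footer.html'; ?>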

    Read the article

  • How to analyze a scenario where a bug didn't get caught and adjust development workflow to prevent similar errors

    - by durron597
    I had a bug that was really difficult to track down, because all the unit tests were green, but the production application didn't work properly. Here's what happened: I had a filter class that set my application to ignore data that was not in some specified time windows. The unit test, which seemed thorough to me, turned green. Additionally, my integration tests also produced results as expected. Production, however, did not work. As a result of the first two points, this problem was very difficult to find. It turned out the problem was that my test dates were using my time zone (America/Chicago) but the production data was providing dates in UTC, which I did not realize, and the logic for the filter wasn't correct for UTC dates. (I was using Joda-Time DateTime objects.) Where did my workflow break down? Did I fail to produce a spec that specified that the logic needed to handle dates in any time zone? Did I fail to thoroughly consider all cases at the unit test level? Did I fail to ensure the integration test was sufficiently similar to production? Other? What changes can I make to my workflow to better prevent this sort of mistake in the future? How can I more effectively debug a problem when there is an issue in production but not in testing?
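    A small illustration of the trap, assuming Joda-Time is on the classpath (the class and variable names are made up for the example): a DateTime built without an explicit zone silently picks up the JVM's default zone, so test data and production data can describe different instants even when the wall-clock values look identical.

        import org.joda.time.DateTime;
        import org.joda.time.DateTimeZone;

        public class TimeZoneTrapDemo {
            public static void main(String[] args) {
                // Built in whatever the JVM default zone is (e.g. America/Chicago on a dev box).
                DateTime localNoon = new DateTime(2014, 6, 1, 12, 0, 0, 0);
                // Built the way production supplies dates: explicitly in UTC.
                DateTime utcNoon = new DateTime(2014, 6, 1, 12, 0, 0, 0, DateTimeZone.UTC);

                // Same wall-clock values, different instants: a filter that compares
                // instants will treat them differently. Prints false unless the
                // default zone happens to be UTC.
                System.out.println(localNoon.isEqual(utcNoon));
            }
        }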

    Read the article

  • Debugging problems in Visual Studio 2005 - No source code available for the current location

    - by espais
    Hi all, I've searched up and down Google for others with a similar problem, and while I can find the error I don't think that other people have the same base problem that I do. Basically, I had to create a project for a unit-testing environment in order to run this test suite. First, I add my original C file, compile, and then a test file (C++) is generated. I then exclude my original source from the project, include this test script (which includes the original source at the top), and then run. I can debug the test file fine, but when it jumps to the original C file I get the dreaded 'no source code available for the current location' error. Both files are in the same location, and I compiled the original file without any issue. Anybody have any thoughts about this? It's driving me crazy!

    Read the article

  • What is the best retort to "premature optimization is the root of all evil"

    - by waffles
    Often I hear the sentiment ... "Why worry about performance, write slow code, get your product to market ... don't worry about performance. You can sort that out later." The culmination of this sentiment is: "... premature optimization is the root of all evil ... #winning" I was wondering, does anybody have a good retort to this one-liner? Ideally an equally strong one-liner that encompasses the reverse of this sentiment.

    Read the article

  • Configure IIS Web Site for alternate Port and receive Access Permission error

    - by Andrew J. Brehm
    When I configure IIS to run a Web site on Port 1414, I get the following error: --------------------------- Internet Information Services (IIS) Manager --------------------------- The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020) However, according to netstat the port is not in use. Completely aside from IIS, I wrote a test program (just to open the port and test it): TcpListener tcpListener; tcpListener = new TcpListener(IPAddress.Any, port); try { tcpListener.Start(); Console.WriteLine("Press \"q\" key to quit."); ConsoleKeyInfo key; do { key = Console.ReadKey(); } while (key.KeyChar != 'q'); } catch (Exception ex) { Console.WriteLine(ex.Message); } tcpListener.Stop(); The result was an exception and the following ex.Message: An attempt was made to access a socket in a way forbidden by its access permissions The port was available but its "access permissions" are not allowing me access. This remains after several restarts. The port is not reserved or in use as far as I know, and while IIS says it is in use, netstat says it is not and my test program receives the error that I am not allowed to access the port. The test program ran elevated. The IIS site is running MQSeries, but the MQ listener also cannot start on port 1414 because of this issue. A quick search of my registry found nothing interesting for port 1414. What are socket access permissions and how can I correct mine to allow access?
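    One avenue worth checking, offered as a guess rather than a diagnosis: Windows can hold ports back from ordinary bind() calls (for example through the Tcpip ReservedPorts registry value, or another service holding the port in an exclusive or wildcard binding), and a blocked port produces exactly this "access permissions" error even when netstat shows no listener. From an elevated command prompt:

        netstat -ano | findstr ":1414"
        reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v ReservedPorts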

    Read the article

  • C printf not working on Ubuntu 13.10 terminal

    - by Blaze Tama
    First, I'm new to Ubuntu so please bear with me. I need to create a C-based program for a course at my university. I was using openSUSE and its Konsole when I was in the university's lab, so basically I need to install openSUSE on my system or use a VM. But I feel lazy to do that, so I tried to run it on my Ubuntu instead of openSUSE. However, no C code seems to work in Ubuntu's terminal. Compiling succeeds, but it's not running, or at least the printf is not printing. This is my code, a very very simple one: #include<stdio.h> #include<unistd.h> #include<stdlib.h> int main() { printf("test"); return 0; } When I compile it with gcc test.c -o test everything works fine and I get an executable file. Then I try to run it with ./test, but the printf output is not printed. No error or warning is shown. Am I missing something? Note: my gcc is a recent one, it should have no problem. Thanks for your help :D
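    A guess at the most likely cause rather than a confirmed answer: the program probably does run, but because the string has no trailing newline the word test ends up glued to the next shell prompt and is easy to overlook. A minimal sketch of the same program with the output on its own line:

        #include <stdio.h>

        int main(void)
        {
            printf("test\n");   /* the newline keeps the output from running into the shell prompt */
            return 0;
        }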

    Read the article

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now back up (and restore) Azure Tables and SQL Databases using an API directly. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx. Backing up Azure Tables can’t be any easier than with the Enzo Backup API. Here is sample code that does the trick: // Create the backup helper class. The constructor automatically sets the SourceStorageAccount property StorageBackupHelper backup = new StorageBackupHelper("storageaccountname", "storageaccountkey", "sourceStorageaccountname", "sourceStorageaccountkey", true, "apilicensekey"); // Now set some properties… backup.UseCloudAgent = false; // backup locally backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp"; // to this file backup.Override = true; backup.Location = DeviceLocation.LocalFile; // Set optional performance options backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // Set GUID strategy by default backup.MaxRESTPerSec = 200; // Attempt to stay below 200 REST calls per second // Start the backup now… string taskId = backup.Backup(); // Use the Environment class to get the final status of the operation EnvironmentHelper env = new EnvironmentHelper("storageaccountname", "storageaccountkey", "apilicensekey"); string status = env.GetOperationStatus(taskId); As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters and start the backup. The performance options are designed to help you back up your Azure Tables quickly, while attempting to stay under a specific threshold to prevent Storage Account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the Backup Strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel, assuming the PartitionKey stores GUID values. Your PartitionKey doesn’t have to contain GUIDs for this strategy to work, but the backup algorithm is tuned for that condition. Other options are available as well, such as filtering which columns, entities or tables are being backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

    Read the article

  • What can you do to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers that I belong to has metrics associated with our code. Over the last several months the errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I personally am completely against this, as I think that applications should be tested by end users; end users are much better testers than programmers. I am not against programmers testing (obviously programmers need to test code), but they are most of the time too close to the code. The reason I think end users should test in our scenario is that we don't have business analysts, we just have programmers. I come from a background where BAs took care of all the testing once programmers signed off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between the development and live environments, and this does catch some bugs. We don't really do end user testing at all; I should say we don't really have anyone testing our code except programmers, which I think gets us into this mess (ideally, we would have BAs, QA or professional testers test). We don't have a QA team or anything of that nature. We don't have fully laid out test cases for our projects. OK, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them, so I don't have the ability to tell them they are doing it all wrong. I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue are greatly appreciated. Thanks.

    Read the article

  • CMT Blog: Virtual IO gets better for LDoms

    - by uwes
    As we all know, virtual IO is of great use in today's IT environments, but when it comes to performance we often have to pay a price. In his blog entry Improved vDisk Performance for LDoms, Stefan Hinker explains how the new implementation of the vdisk/vds software stack in Solaris 11.1 SRU 19 (and a Solaris 10 patch shortly afterwards) significantly improves the latency and throughput of virtual disk IO.

    Read the article

  • EPM 11.1.2.2.000 release - considerations

    - by THE
    (guest article by Nancy) Please be aware that with the upcoming release of EPM v11.1.2.2.000, it is highly recommended you first read the "ORACLE® ENTERPRISE PERFORMANCE MANAGEMENT SYSTEM 11.1.2.2.000 Readme" prior to installing this release. We want to highlight the "Installation Information" section, which includes the following late-breaking information: Business Rules Migration to Calculation Manager Oracle Hyperion Calculation Manager has replaced Oracle Hyperion Business Rules as the mechanism for designing and managing business rules; therefore, Business Rules is no longer released with EPM System Release 11.1.2.2. If you are applying 11.1.2.2 as a maintenance release, or upgrading to Release 11.1.2.2, and have been using Business Rules in an earlier release, you must migrate to Calculation Manager rules in Release 11.1.2.2. (See Oracle Enterprise Performance Management System Installation and Configuration Guide.) Planning User Interface Enhancements This release of Planning includes a large number of user interface enhancements, as described in Oracle Hyperion Planning New Features. To optimize performance with these new features, you must implement the following recommended configuration. Server: 64-bit, 16 GB physical RAM Client: Optimized for Internet Explorer 9 and Firefox 10 or higher Client-to-Server Connectivity: High-speed internet connection or VPN connection between client and server, client-to-server ping time < 150 milliseconds for best performance. The new, improved Planning user interface requires efficient browsers to handle interactivity provided through Web 2.0-like functionality. In our testing, Internet Explorer 7, Internet Explorer 8, and Firefox 3.x are not sufficient to handle such interactivity, and the responsiveness in these versions of browsers is not as fast as the user interface in the previous releases of Planning. For this reason, we strongly recommend that you upgrade your browser to Internet Explorer 9 or Firefox 10 to get responsiveness similar to what you experienced in previous releases. In some instances, the response times in Internet Explorer 7, Internet Explorer 8 and Firefox 3.x could be acceptable. Hence, we suggest that you adopt the new user interface only after you conduct an end user response test and you are satisfied with the results of these tests for these versions of browsers. Please note that it is still possible to leverage the old user interface and features from Planning Release 11.1.2.1. (For more information, see “Using the Planning Release 11.1.2.1 User Interface and Features” in the Oracle Hyperion Planning Administrator's Guide.) IBM HTTP Server and IIS Default Ports Both IBM HTTP Server and IIS Web Server use 80 as their default port. If you are using WebSphere, you must change one of these defaults so that there is no port conflict. If you have further questions, please utilize the Planning or Essbase MOS Community.

    Read the article

  • links for 2010-06-11

    - by Bob Rhubart
    The Web-Service May Not be the Bottleneck! | Shopzilla Tech Blog: "The web-service doesn't perform." "We have a performance problem we're investigating." Sound familiar? Is the web-service under test really the performance bottleneck? (tags: ping.fm cloud soa oracle coherence virtualization)

    Read the article

  • (Ubuntu - LAMP) phpMyAdmin install error 404 not found

    - by meyosef
    Hi, I installed Apache, MySQL, PHP and then phpMyAdmin on Ubuntu Desktop 10.04. I tested Apache by opening localhost in a browser and it works. I tested MySQL from the console and it works. I tested phpMyAdmin at http://localhost/phpmyadmin/ and get a 404 Not Found error. I have seen a suggested solution for this problem (copy /phpmyadmin/ into /www/), but I don't want to put it in /www/ next to my website files (maybe I'm wrong or missing something; please tell me if that's the case). What is the best way to solve this problem? Thanks
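    A sketch of one common fix, assuming phpMyAdmin was installed from the Ubuntu phpmyadmin package and Apache uses the default configuration layout: the package ships its own Apache snippet in /etc/phpmyadmin/apache.conf, and linking it into Apache's conf.d directory publishes http://localhost/phpmyadmin/ without copying anything into /var/www.

        sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
        sudo /etc/init.d/apache2 reload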

    Read the article

  • root folder php scripts not running in nginx

    - by Thermionix
    nginx with php-fpm on ubuntu 12.04 server. attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. if I place the test.php in a subdirectory, i.e. /var/www/test/test.php it executes. root.conf; root /var/www; include php-fpm.conf; location ~ /\. { access_log off; log_not_found off; deny all; } php-fpm.conf; location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.socket; include fastcgi_params; } fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_index index.php; fastcgi_param HTTPS on; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; #fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200;

    Read the article

  • Getting Started with Columnstore Index in SQL Server 2014 – Part 1

    Column Store Index, which improves performance of data warehouse queries several folds, was first introduced in SQL Server 2012. In this article series Arshad Ali talks about how you can get started with using enhanced columnstore index features in SQL Server 2014 and do some performance tests to understand the benefits.
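    For readers who just want to see the shape of the feature before diving into the series, here is a minimal illustration (the table name is made up): SQL Server 2012 introduced nonclustered columnstore indexes, and SQL Server 2014 also lets the columnstore index be the clustered, updatable index of the table.

        -- SQL Server 2014: a clustered, updatable columnstore index on a hypothetical fact table
        CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
            ON dbo.FactSales;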

    Read the article

  • SQL Saturday #323 - Paris

    On September 13, 2014 the French SQL Server Community (GUSS) will be holding a SQL Saturday conference. The event is free to attend, with 4 paid-for pre-conference sessions available. Register while space is available.

    Read the article

  • su not giving proper message for restricted LDAP groups

    - by user1743881
    I have configured PAM authentication on a Linux box to allow only a particular group to log in. I enabled PAM and LDAP through authconfig and modified access.conf as below: [root@test root]# tail -1 /etc/security/access.conf - : ALL EXCEPT root test-auth : ALL I also modified the sudoers file to allow su for this group: [root@test ~]# tail -1 /etc/sudoers %test-auth ALL=/bin/su Now only members of this LDAP group can log in to the system. However, when I try su from any of these authorized users, it asks for the password and then, even though I enter the correct password, it gives a message like "incorrect password" and the login fails. /var/log/secure shows that the user does not have permission to get access, but then it should print a message like "Access denied", the way it does for console login. The functionality is working, but it's not giving proper messages. Could anyone please help with this? My /etc/pam.d/su file: [root@test root]# cat /etc/pam.d/su #%PAM-1.0 auth sufficient pam_rootok.so # Uncomment the following line to implicitly trust users in the "wheel" group. #auth sufficient pam_wheel.so trust use_uid # Uncomment the following line to require a user to be in the "wheel" group. #auth required pam_wheel.so use_uid auth include system-auth account sufficient pam_succeed_if.so uid = 0 use_uid quiet account include system-auth password include system-auth session include system-auth session optional pam_xauth.so

    Read the article

  • How do I separate code into classes?

    - by Trycon
    I have this main class: package javagame; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.SlickException; import org.newdawn.slick.state.BasicGameState; import org.newdawn.slick.state.StateBasedGame; public class tests extends BasicGameState{ public boolean render=false; tests1 test = new tests1(); public tests(int test) { // TODO Auto-generated constructor stub } @Override public void init(GameContainer arg0, StateBasedGame arg1) throws SlickException { // TODO Auto-generated method stub } @Override public void render(GameContainer arg0, StateBasedGame arg1, Graphics g) throws SlickException { // TODO Auto-generated method stub if(render==true) { g.drawString("Hello",100,100); } } @Override public void update(GameContainer gc, StateBasedGame s, int delta) throws SlickException { // TODO Auto-generated method stub test.render=render; test.update(gc, s, delta); } @Override public int getID() { // TODO Auto-generated method stub return 1000; } } and this helper class: package javagame; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Input; import org.newdawn.slick.state.StateBasedGame; public class tests1 { public boolean render; public void update(GameContainer gc, StateBasedGame s, int delta) { Input input = gc.getInput(); if(input.isKeyPressed(Input.KEY_X)) { render=true; } } } I was looking for a way to avoid having too much code in one class. I'm new to Java. When I run my game and press X, it does not work. How am I supposed to fix that?
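    A guess based only on the code shown: the boolean flag is copied one way. In tests.update() the parent writes its own (still false) render value into test.render and then calls test.update(), but it never reads the result back, so the tests.render field that the render() method checks stays false. A sketch of the kind of change that would let the flag flow back into the parent state (this replaces the update() method of the tests class above):

        @Override
        public void update(GameContainer gc, StateBasedGame s, int delta) throws SlickException {
            test.update(gc, s, delta);   // let the helper react to the input first
            render = test.render;        // then copy its flag back so render() can see it
        }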

    Read the article

  • SVN Authentication with LDAP and Active Directory

    - by Alex Holsgrove
    I am having a few problems getting SVN authentication to work with LDAP / Active Directory. My SVN installation works fine, but after enabling LDAP in my apache vhost, I just can't get my users to authenticate. I can use a selection of LDAP browsers to successfully connect to Active Directory, but just can't seem to get this to work. SVN is setup in /var/local/svn Server is svn.domain.local For testing, my repository is /var/local/svn/test My vhost file is as follows: <VirtualHost *:80> ServerAdmin [email protected] ServerAlias svn.domain.local ServerName svn.domain.local DocumentRoot /var/www/svn/ <Location /test> DAV svn #SVNListParentPath On SVNPath /var/local/svn/test AuthzSVNAccessFile /var/local/svn/svnaccess AuthzLDAPAuthoritative off AuthType Basic AuthName "SVN Server" AuthBasicProvider ldap AuthLDAPBindDN "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local" AuthLDAPBindPassword "admin password" AuthLDAPURL "ldap://192.168.1.6:389/OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local?sAMAccountName?sub?(objectClass=*)" Require valid-user </Location> CustomLog /var/log/apache2/svn/access.log combined ErrorLog /var/log/apache2/svn/error.log </VirtualHost> In my error.log, I don't seem to get any bind errors (should I be looking elsewhere?), but just the following: [Thu Jun 21 09:51:38 2012] [error] [client 192.168.1.142] user alex: authentication failure for "/test/": Password Mismatch, referer: http://svn.domain.local/test/ At the end of "AuthLDAPURL", I have seen people using TLS and NONE but neither seem to help in my case. I have the ldap modules loaded and have checked as much as I know, so any help would be most welcome. Thanks

    Read the article

< Previous Page | 319 320 321 322 323 324 325 326 327 328 329 330  | Next Page >