Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • WordPress 3.5 Multisite and nginx siteurl issues

    - by Florin Gogianu
    I'm setting up multisite on localhost in subdirectories. The problem is that when I'm trying to access the dashboard of a site I just created (localhost/wptest/site/wp-admin) I get "This webpage has a redirect loop", and when I try to access the actual website (localhost/wptest/site) the page loads but without assets, such as CSS. When I access the network dashboard, or the primary site dashboard on localhost/wptest, everything is just fine. Also when I edit the permalink of the second site in the network dashboard to be like this: localhost/site, it also runs fine. How to make it work with the default permalink structure localhost/wptest/site?

    The WordPress files are in /usr/share/html/wptest. The wp-config.php is as follows:

      define('WP_ALLOW_MULTISITE', true);
      define('MULTISITE', true);
      define('SUBDOMAIN_INSTALL', false);
      define('DOMAIN_CURRENT_SITE', 'localhost');
      define('PATH_CURRENT_SITE', '/wptest/');
      define('SITE_ID_CURRENT_SITE', 1);
      define('BLOG_ID_CURRENT_SITE', 1);

    And the server block / virtual host is like this:

      server {
          ##DM - uncomment following line for domain mapping
          listen 80 default_server;
          #server_name example.com *.example.com ;
          ##DM - uncomment following line for domain mapping
          #server_name_in_redirect off;
          access_log /var/log/nginx/example.com.access.log;
          error_log /var/log/nginx/example.com.error.log;
          root /usr/share/nginx/html/wptest;
          index index.html index.htm index.php;
          if (!-e $request_filename) {
              rewrite /wp-admin$ $scheme://$host$uri/ permanent;
              rewrite ^(/[^/]+)?(/wp-.*) $2 last;
              rewrite ^(/[^/]+)?(/.*\.php) $2 last;
          }
          location / {
              try_files $uri $uri/ /index.php?$args;
          }
          location ~ \.php$ {
              try_files $uri /index.php;
              include fastcgi_params;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
          }
          location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
              access_log off;
              log_not_found off;
              expires max;
          }
          location = /robots.txt { access_log off; log_not_found off; }
          location ~ /\. { deny all; access_log off; log_not_found off; }
      }

    And finally here's an error log entry:

      2013/06/29 08:05:37 [error] 4056#0: *52 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: example.com, request: "GET /nginx HTTP/1.1", host: "localhost"
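    A quick way to see where the loop bounces while the rewrites are being adjusted is to trace the redirect chain by hand instead of letting the browser follow it. This is only a diagnostic sketch using the Python standard library; the URL below is the dashboard path from the question and may need adjusting.

      import http.client
      from urllib.parse import urlsplit

      def trace(url, max_hops=10):
          # Follow redirects manually and print each hop, so a loop such as
          # wp-admin -> wp-admin/ -> wp-admin shows up clearly in the output.
          for hop in range(max_hops):
              parts = urlsplit(url)
              conn = http.client.HTTPConnection(parts.netloc, timeout=5)
              conn.request("GET", parts.path or "/")
              resp = conn.getresponse()
              print(hop, resp.status, url)
              location = resp.getheader("Location")
              conn.close()
              if resp.status not in (301, 302, 303, 307, 308) or not location:
                  break
              url = location if "://" in location else "http://" + parts.netloc + location

      trace("http://localhost/wptest/site/wp-admin/")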

    Read the article

  • Microsoft Office 2003 documents (Excel and Word) intermittently take 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook is taking, randomly, 30 seconds to open. Before answering, please bear in mind the following.

    Problem symptoms:
    - The hang is intermittent and it takes exactly 30 seconds;
    - During the hang there is no CPU or disk activity;
    - It only happens during document load. Everything runs smoothly after that;
    - Windows Explorer.exe hangs on the folder, but all other folders, the system and applications are still responsive;
    - There are no consecutive hangs. I have to wait a while to reproduce this behaviour;
    - All sample documents are located on a local drive (C:\BPI);
    - No document has macros or uses any add-ins;
    - The problem also occurs with other file extensions, such as .PDF, for example;
    - Office 2003 has been in use for several years;
    - The computer is running Windows XP;
    - The computer has several network mapped drives, all addressed to the main file server;
    - Recently, the main file server was replaced by Windows SBS 2011 Standard Edition.

    What I have done so far:
    - I have traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1. That is how I found that the hang was taking exactly 30 seconds. For further information, please refer to Oliver Salzburg's tutorial.
    - Using Process Monitor, I have also figured out that five operations were taking most of the sample collection duration. Looking at the sample image, you will notice in the Operation column that one single operation was taking 29 seconds;
    - I have tried different documents (.xls and .doc), all of them smaller than 30 KB;
    - I have temporarily removed all shortcuts in the user's Documents folder that were pointing to network drives or shares;
    - I have run CCleaner to fix registry issues;
    - I made sure that there were no external links in the tested workbook or Word documents;
    - I have reproduced this behaviour for hours;
    - I have researched extensively on the web for hours;
    - Process Monitor's collected and filtered data
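    Since the stall is suspiciously close to a network timeout, one cheap check alongside Process Monitor is to time a directory listing of C:\BPI and of each mapped drive and see which one blocks for roughly 30 seconds. A rough sketch in Python; the drive letters are placeholders, not taken from the question.

      import os
      import time

      # Hypothetical path list; replace with the actual mapped drives on the machine.
      PATHS = ["C:\\BPI", "H:\\", "S:\\"]

      for path in PATHS:
          start = time.perf_counter()
          try:
              os.listdir(path)
              status = "ok"
          except OSError as exc:
              status = "error: %s" % exc
          print("%-10s %6.1f s  %s" % (path, time.perf_counter() - start, status))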

    Read the article

  • Sendmail Tuning For Batch Mail Jobs

    - by Kyle Brandt
    I have webservers that send out emails to a sendmail relay server as a batch job. The emails need to be accepted by the relay sendmail server as fast as possible; however, they do not need to go out (be relayed) very quickly. I am seeing a couple of timeouts once in a while from the webserver trying to connect to the relay server. The load currently is about 30 emails a second for a couple of minutes. There are quite a few tuning options for sendmail in the sendmail tuning guide. What I am focusing on now is the Delivery Mode:

      Delivery Mode
      There are a number of delivery modes that sendmail can operate in, set by the DeliveryMode (d) configuration option. These modes specify how quickly mail will be delivered. Legal modes are:

          i   deliver interactively (synchronously)
          b   deliver in background (asynchronously)
          q   queue only (don't deliver)
          d   defer delivery attempts (don't deliver)

      There are tradeoffs. Mode i gives the sender the quickest feedback, but may slow down some mailers and is hardly ever necessary. Mode b delivers promptly but can cause large numbers of processes if you have a mailer that takes a long time to deliver a message. Mode q minimizes the load on your machine, but means that delivery may be delayed for up to the queue interval. Mode d is identical to mode q except that it also prevents lookups in maps including the -D flag from working during the initial queue phase; it is intended for ``dial on demand'' sites where DNS lookups might cost real money. Some simple error messages (e.g., host unknown during the SMTP protocol) will be delayed using this mode. Mode b is the usual default.

      If you run in mode q (queue only), d (defer), or b (deliver in background) sendmail will not expand aliases and follow .forward files upon initial receipt of the mail. This speeds up the response to RCPT commands. Mode i should not be used by the SMTP server.

    I currently have the CentOS default modes:

      sendmail.cf: DeliveryMode=background
      submit.cf:   DeliveryMode=i

    Sendmail.cf/mc is for outgoing email from the relay (to the intertubes) and submit.cf/mc is for incoming email (from my webservers). Would it make sense to change the outgoing delivery mode to queue? If I did, how would the outbound email flow behave? If this is the right thing to do, can anyone show me example mc configurations for this change? If it isn't, what recommendations are there for these constraints?
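    To see whether the relay is the bottleneck at accept time (as opposed to on the relaying side), it can help to reproduce the webserver's batch from a small test script and measure how quickly connections are accepted. A rough sketch using Python's standard smtplib; the relay host and addresses are placeholders.

      import smtplib
      import time

      RELAY = "relay.example.com"   # placeholder for the internal relay host
      MSG = "Subject: batch test\r\n\r\nhello\r\n"

      start = time.perf_counter()
      sent = errors = 0
      for i in range(100):
          try:
              # One connection per message, with an explicit timeout,
              # roughly mimicking the batch job's behaviour.
              with smtplib.SMTP(RELAY, 25, timeout=10) as smtp:
                  smtp.sendmail("test@example.com", ["inbox@example.com"], MSG)
              sent += 1
          except (smtplib.SMTPException, OSError) as exc:
              errors += 1
              print("message %d failed: %s" % (i, exc))
      print("%d sent, %d errors in %.1f s" % (sent, errors, time.perf_counter() - start))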

    Read the article

  • System Reserved partition no longer marked as System

    - by Mark
    I recently posted a question to Super User about accidentally marking my external HDD's partition as Active and how I could undo my accidental mistake. I followed the instructions provided and they worked fine. This involved some command line magic and, from what I understand, I did not really have to do this, but I just wanted to get things back to how they were originally. After making the fix, things went back to normal in Disk Management. After I restarted my computer, though, I had an issue:

      BOOTMGR is missing
      Press Ctrl+Alt+Del to restart

    Rugh roh! I brought my laptop to work so I could search for a solution on my work computer, and I found a nice guide on fixing the issue. To summarize the instructions, I had to reboot with my Windows 7 install disc and click the Repair button. Once there I could then repair the start-up options. One of the commenters on the site claimed you need to do this twice, as the first time the "repair" doesn't actually fix it. I found this to be true as well. I tried to repair it and it did some work, then rebooted. I then got the same error again. I booted from the CD again and repaired the start-up options, and after this second time Windows started to boot up. Before the restart I got a nice info window telling me that it did make repairs to the boot info (this was promising).

    I've been using Windows 7 for a few days now with no problem, but I just recently noticed that I can now see the System Reserved partition in Computer. I immediately went to Disk Management to see what was up. I noticed that my System Reserved partition is no longer marked as System, and instead I believe the repair operation made my C: drive the system partition. I'm not fully aware of what the System partition really is, but I briefly read that it's a Windows 7 thing that gets created on install of Win7 and that some BitLocker encryption stuff, as well as some boot files, gets written to an isolated partition.

    1. How can I undo this and make the System Reserved partition marked as System instead of my OS C: partition?
    2. How can I make it so that I don't see this partition in Computer (I believe fixing #1 will fix this)?
    3. What are the implications of the current state and the fact that I can now browse into this new partition?

    Thanks in advance.

    Read the article

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things.

    At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights.

    I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise-level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on.

    I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files, with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.
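    For what it's worth, the "snapshot sends to get everything offsite" part can be scripted fairly simply. Below is a minimal sketch of that idea in Python, assuming the ZFS command-line tools and SSH access to the offsite box; the dataset, pool and host names are made up for illustration, not taken from the question.

      import datetime
      import subprocess

      DATASET = "tank/agency-data"            # assumed local ZFS dataset
      REMOTE = "backup@offsite.example.com"   # assumed offsite FreeBSD host
      TARGET = "backup/agency-data"           # assumed dataset on the remote pool

      def snapshot_and_send(previous_snap=None):
          # Take a dated snapshot, then send it (incrementally if we already
          # have a previous snapshot) into the remote dataset via ssh.
          snap = "%s@%s" % (DATASET, datetime.date.today().strftime("%Y%m%d"))
          subprocess.run(["zfs", "snapshot", snap], check=True)

          send_cmd = ["zfs", "send"]
          if previous_snap:
              send_cmd += ["-i", previous_snap]
          send_cmd.append(snap)

          send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
          subprocess.run(["ssh", REMOTE, "zfs", "recv", "-F", TARGET],
                         stdin=send.stdout, check=True)
          send.stdout.close()
          send.wait()
          return snap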

    Read the article

  • Separate zone exceptions for each view in BIND

    - by Stefan M
    Problem: Separate zones by query source network and return different records for LAN clients compared to WAN clients.

    I've implemented this at home on a small ALIX router with BIND 9.4. One view called "lan" and one view called "wan". The "lan" view had just the root.hints file and one zone. The "wan" view had many other zones, including a copy of the one zone from the "lan" view, but with different records. Querying domain1.tld from the LAN would give me local records. Querying domain1.tld from the WAN would give me external records. Querying domain2.tld from the LAN would give me the same records as from the WAN, as it only existed in the WAN view.

    Now I'm trying to re-implement this on a larger scale and suddenly my view is unable to query anything outside itself. This is natural according to the bind-users list, and they suggest I copy all my zones into my LAN view. I'm hoping someone here has a better solution, because that means I'll have to copy, and maintain, thousands of zone files in multiple views. This is unfeasible. My configuration at home resembles this:

      acl lanClients {
          192.168.22.0/24;
          127.0.0.1;
      };

      view "intranet" {
          match-clients { lanClients; };
          recursion yes;
          notify no;

          // Standard zones
          zone "." {
              type hint;
              file "etc/root.hint";
          };

          zone "domain1.tld" {
              type master;
              file "intranet/domain1.tld";
          };
      };

      view "internet" {
          match-clients { !localnets; any; };
          recursion no;
          allow-transfer { slaveDNS; };
          include "master.zones";
      };

    Requests from the LAN for domain1.tld give local records, requests from the WAN give remote records. This works fine both at home and in my new BIND 9.7 on a larger scale. The difference is that at home I have somehow managed to make my LAN get remote records for the domains in master.zones, without specifying those zones as duplicates in the "intranet" view. Trying this on a larger scale with BIND 9.7, I get no results at all except for the zones specified in the view. What am I missing? I've tried the same configuration for BIND 9.7.
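    One way to confirm which view is answering, while experimenting with the configuration, is to query the same name from a LAN source address and from a WAN-facing source address and compare the answers. A small diagnostic sketch wrapping dig from Python; the server and source addresses are placeholders for whatever the test hosts actually use.

      import subprocess

      SERVER = "192.168.22.1"                      # placeholder DNS server address
      SOURCES = ["192.168.22.5", "203.0.113.10"]   # placeholder LAN and WAN source addresses

      for source in SOURCES:
          # dig -b binds the query to a specific local source address,
          # which is what match-clients uses to select a view.
          result = subprocess.run(
              ["dig", "@" + SERVER, "-b", source, "domain1.tld", "A", "+short"],
              capture_output=True, text=True)
          answer = result.stdout.strip() or "(no answer)"
          print("from %s: %s" % (source, answer))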

    Read the article

  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification released its Early Draft 3 recently. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week was another milestone, when the first Java EE 7 specification implementation was added to the GlassFish 4 builds. Jakub blogged about the Jersey 2 integration in the GlassFish 4 builds. Most of the basic functionality is working, but EJB, CDI, and Validation are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with using that functionality.

    Create a Java EE 6-style Maven project:

      mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false

    Note, this is still a Java EE 6 archetype, at least for now. Open the project in NetBeans IDE as it makes it much easier to edit/add the files.

    Add the following <repositories>:

      <repositories>
          <repository>
              <id>snapshot-repository.java.net</id>
              <name>Java.net Snapshot Repository for Maven</name>
              <url>https://maven.java.net/content/repositories/snapshots/</url>
              <layout>default</layout>
          </repository>
      </repositories>

    Add the following <dependency>s:

      <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.10</version>
          <scope>test</scope>
      </dependency>
      <dependency>
          <groupId>javax.ws.rs</groupId>
          <artifactId>javax.ws.rs-api</artifactId>
          <version>2.0-m09</version>
          <scope>test</scope>
      </dependency>
      <dependency>
          <groupId>org.glassfish.jersey.core</groupId>
          <artifactId>jersey-client</artifactId>
          <version>2.0-m05</version>
          <scope>test</scope>
      </dependency>

    The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here. Here is a simple resource class:

      @Path("movies")
      public class MoviesResource {

          @GET
          @Path("list")
          public List<Movie> getMovies() {
              List<Movie> movies = new ArrayList<Movie>();
              movies.add(new Movie("Million Dollar Baby", "Hillary Swank"));
              movies.add(new Movie("Toy Story", "Buzz Light Year"));
              movies.add(new Movie("Hunger Games", "Jennifer Lawrence"));
              return movies;
          }
      }

    This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project is using the standard JAX-RS APIs. Of course, you need the trivial "Movie" and "Application" classes as well. They are available in the downloadable project anyway.

    Build the project:

      mvn package

    And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start as "bin/asadmin start-domain"):

      asadmin deploy --force=true target/jersey2-helloworld.war

    Add a simple test case by right-clicking on the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the function "testGetMovies" with:

      @Test
      public void testGetMovies() {
          System.out.println("getMovies");
          Client client = ClientFactory.newClient();
          List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list")
                  .request()
                  .get(new GenericType<List<Movie>>() {});
          assertEquals(3, movieList.size());
      }

    This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource.

    Run the test by giving the command "mvn test" and see output like:

      -------------------------------------------------------
       T E S T S
      -------------------------------------------------------
      Running example.MoviesResourceTest
      getMovies
      Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

      Results :

      Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.1 functionality, then Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using standard APIs anyway. This workaround is only required if Jersey 1.x functionality needs to be accessed. The complete source code explained in this project can be downloaded from here.

    Here are some pointers to follow:
    - JAX-RS 2 Specification Early Draft 3
    - Latest status on the specification (jax-rs-spec.java.net)
    - Latest JAX-RS 2.0 Javadocs
    - Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net)
    - Latest Jersey API Javadocs
    - Latest GlassFish 4.0 Promoted Build
    - Follow @gf_jersey
    - Provide feedback on Jersey 2 to [email protected] and the JAX-RS specification to [email protected].
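    As a quick sanity check outside the JUnit test, the deployed resource can also be hit directly with a few lines of Python (standard library only); the URL is the same one used in the test above.

      from urllib.request import urlopen

      url = "http://localhost:8080/jersey2-helloworld/webresources/movies/list"
      with urlopen(url) as resp:
          # Print the status, the content type negotiated by JAX-RS,
          # and the raw entity body returned by getMovies().
          print(resp.status, resp.headers.get("Content-Type"))
          print(resp.read().decode("utf-8"))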

    Read the article

  • Silverlight 4 Twitter Client – Part 7

    - by Max
    Download this article as a PDF. Welcome back :) This week we are going to look at something more exciting and a much required feature for any Twitter client – auto refresh, so as to show new status updates. We are going to achieve this using Silverlight 4 timers and refresh our datagrid every 2 minutes to show new updates. We will do this so that we make only minimal requests to the Twitter API, so that Twitter does not block us – there is a limit of 150 requests an hour. Let us get started now. Also, we will get the profile user id hyperlinked, so that whenever the user clicks on it, we will take them to their Twitter page.

    Also, it was a pain to always run this application by pressing F5: it would open in a browser, and you would have to right click, uninstall and install it again to see any changes. All this and yet we were not able to debug it :( Now there is a solution for this, to run a Silverlight application directly out of browser and yet have the debug feature. Super cool, here is how. Right-click on the Silverlight project, go to Debug, then select the Out-Of-Browser application option and choose the *.Web project. Then just right-click on the SL project and set it as the Startup Project. There you go – now every time you press F5, it will automatically run out of browser and still have the debug options. I got to know about this after some binging.

    Now let us jump to the core straight away.

    1) To get the user id hyperlinked, we need to have a DataGridTemplateColumn and within that have a HyperlinkButton. The code for this will be:

      <data:DataGridTemplateColumn>
          <data:DataGridTemplateColumn.CellTemplate>
              <DataTemplate>
                  <HyperlinkButton Click="HyperlinkButton_Click" Content="{Binding UserName}" TargetName="_blank"></HyperlinkButton>
              </DataTemplate>
          </data:DataGridTemplateColumn.CellTemplate>
      </data:DataGridTemplateColumn>

    2) Now let us look at how we are getting this done by looking into the HyperlinkButton_Click event handler. There we will dynamically set the NavigateUri to the Twitter page. I tried to do this using some binding, eval-like stuff as in ASP.NET, but no luck!

      private void HyperlinkButton_Click(object sender, RoutedEventArgs e)
      {
          HyperlinkButton hb = (HyperlinkButton)e.OriginalSource;
          hb.NavigateUri = new Uri("http://twitter.com/" + hb.Content.ToString(), UriKind.Absolute);
      }

    3) Now we need to switch on our timer right in the OnNavigatedTo event of our SL page. So we need to modify our OnNavigatedTo event to something like below:

      protected override void OnNavigatedTo(NavigationEventArgs e)
      {
          image1.Source = new BitmapImage(new Uri(GlobalVariable.profileImage, UriKind.Absolute));
          this.Title = GlobalVariable.getUserName() + " - Home";
          if (!GlobalVariable.isLoggedin())
              this.NavigationService.Navigate(new Uri("/Login", UriKind.Relative));
          else
          {
              currentGrid = "Timeline-Grid";
              TwitterCredentialsSubmit();
              myDispatcherTimer.Interval = new TimeSpan(0, 0, 0, 60, 0);
              myDispatcherTimer.Tick += new EventHandler(Each_Tick);
              myDispatcherTimer.Start();
          }
      }

    I use a global string – here it is the currentGrid variable – to indicate what is bound in the datagrid, so that after every timer tick I can rebind the latest data to it again. That is, I will only rebind the friends timeline again if the data grid currently holds it, and I'll only rebind the respective list status again if a list status is already bound to the data grid. In the above timer code, it's set to trigger the Each_Tick event handler every 1 minute (60 seconds). TimeSpan takes in (days, hours, minutes, seconds, milliseconds).

    4) Now we need to set the list name in the currentGrid variable when a list button is clicked. So add the code line below to the list button event handler:

      currentGrid = currentList = b.Content.ToString();

    5) Now let us see how the Each_Tick event handler is implemented:

      public void Each_Tick(object o, EventArgs sender)
      {
          if (!currentGrid.Equals("Timeline-Grid"))
              getListStatuses(currentGrid);
          else
          {
              WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);
              WebClient myService = new WebClient();
              myService.AllowReadStreamBuffering = true;
              myService.UseDefaultCredentials = false;
              myService.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
              myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);
              myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));
          }
      }

    If the data grid holds the friends timeline, I just use the same bit of code we already had to bind the friends timeline to the data grid. Copy paste. But if it is some list timeline that is bound in the datagrid, I then call the getListStatuses method with the currentGrid string, which will actually be holding the list name.

    6) I wanted to make the hyperlinks inside the status message clickable, so that when the user clicks on one, we can open that link. I tried using a converter with a regex to recognize a URL and wrap it up with an href, but that is not gonna work in a Silverlight TextBlock :( Anyway, that converter code is in the zip file.

    7) You can get the complete project files from here.

    8) Please comment below with your doubts, suggestions and improvements. I will try to reply as early as possible. Thanks for all your support.

    Technorati Tags: Silverlight 4, Datagrid, Twitter API, Silverlight Timer

    Read the article

  • My error with upgrading 4.0 to 4.2 - What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, uploaded and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were not any resources to fail back and forth between the controllers. Each controller had its own private management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2.

    Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we had just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet.

    So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. Once again, it seemed to go great, rebooted, and came back up to the same issue, where it came up on 4.0 instead of 4.2. See the picture below....

    HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishworks engineer to look at the files and the code and determine what file was messed up and fix it. The system was up and working just fine; it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would roll back to the 4.2 code.

    Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... Which was stupid on my part, because something was wrong with the 4.2 code file here and the 4.0 was just fine. So now the system could not boot at all, the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time.

    So.... If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable. Lesson learned.

    By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me with the above issue, but it's important to follow the correct procedure when doing an upgrade:

    1) Upload the software to both controllers and wait for it to unpack
    2) On controller "A" navigate to configuration/cluster and click "takeover"
    3) Wait for controller "B" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software
    4) Wait for controller "B" to apply the update and finish rebooting
    5) Log in to controller "B", navigate to configuration/cluster and click "takeover"
    6) Wait for controller "A" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software
    7) Wait for controller "A" to apply the update and finish rebooting
    8) Log in to controller "B", navigate to configuration/cluster and click "failback"

    Read the article

  • Can't save screen resolution setting.

    - by Searock
    Hi, my screen resolution in Windows and the previous version of Ubuntu (9.04) was 1152 x 864, but Ubuntu 10.04 only gives me the options of 1024 x 768 and 1360 x 768. I have somehow managed to add the 1152 x 864 resolution by using the xrandr command:

      searock@searock-desktop:~$ cvt 1152 864
      # 1152x864 59.96 Hz (CVT 1.00M3) hsync: 53.78 kHz; pclk: 81.75 MHz
      Modeline "1152x864_60.00"  81.75  1152 1216 1336 1520  864 867 871 897 -hsync +vsync

      searock@searock-desktop:~$ xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync

      searock@searock-desktop:~$ xrandr --addmode S-video 1152x864
      xrandr: cannot find output "S-video"

      searock@searock-desktop:~$ xrandr
      Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
      VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
         1360x768       59.8
         1024x768       60.0*
         800x600        60.3     56.2
         848x480        60.0
         640x480        59.9     59.9
        1152x864_60.00 (0x124)   81.0MHz
              h: width  1152 start 1216 end 1336 total 1520 skew    0 clock   53.3KHz
              v: height  864 start  867 end  871 total  897           clock   59.4Hz

      searock@searock-desktop:~$ xrandr --addmode VGA1 1152x864_60.00

    But the problem is that whenever I restart my computer I get this message:

      Could not apply the stored configuration for the monitors.
      Could not find a suitable configuration of screens.

    And then it comes back to 1024 x 768. My graphics card details: Intel(R) 82945G Express Chipset Family. Is there any way I can fix this once and for all? Thanks.

    Edit 1: rumtscho has suggested that I modify the xorg.conf file, but I am not sure what HorizSync means. Is it the horizontal frequency? My monitor model is an Acer v173. Here's my specification. So what should HorizSync and VertRefresh be?

    Edit 2: I have edited my xorg.conf file as follows:

      Section "Monitor"
          Identifier   "Configured Monitor"
          HorizSync    30-80
          VertRefresh  55-75
      EndSection

    Then I added the resolution and restarted my computer, and I am still facing the same problem. Is there something that I am missing?

    Edit 3: For now I have edited /etc/gdm/Init/Default (the gdm startup script) to include the following xrandr commands, just below the line "initctl -q emit login-session-start DISPLAY_MANAGER=gdm":

      xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
      xrandr --addmode VGA1 1152x864_60.00
      xrandr -s 1152x864_60.00

    This has solved my problem, but these commands have increased my computer's boot time. I think I will have to edit the xorg.conf file properly.

    Edit 4: Instead of adding this to the gdm startup scripts, I have created a shell script and added it to startup (System - Preferences - Startup Applications):

      #!/bin/bash
      xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
      xrandr --addmode VGA1 1152x864_60.00
      xrandr -s 1152x864_60.00

    And don't forget to add execution rights (Right Click - Properties - Permissions - Allow executing file as program).

    Read the article

  • SCCM SP2 - OOB Management Certificates Problems

    - by Achinoam
    Hi experts, I have a vPro client computer with AMT 4.0. It was imported successfully via the Import OOB Computers wizard, and after sending a "Hello" packet it became provisioned (the SCCM GUI displays AMT Status: Provisioned). But when I try to perform power operations on this machine, they always fail with the following lines in the log:

      AMT Operation Worker: Wakes up to process instruction files  7/29/2009 10:59:29 AM  2176 (0x0880)
      AMT Operation Worker: Wait 20 seconds...  7/29/2009 10:59:29 AM  2176 (0x0880)
      Auto-worker Thread Pool: Work thread 3884 started  7/29/2009 10:59:29 AM  3884 (0x0F2C)
      session params : https://amt4.domaindemo.com:16993 , 11001  7/29/2009 10:59:29 AM  3884 (0x0F2C)
      ERROR: Invoke(invoke) failed: 80020009argNum = 0  7/29/2009 10:59:31 AM  3884 (0x0F2C)
      Description: A security error occurred  7/29/2009 10:59:31 AM  3884 (0x0F2C)
      Error: Failed to Invoke CIM_BootConfigSetting::ChangeBootOrder_INPUT action.  7/29/2009 10:59:31 AM  3884 (0x0F2C)
      AMT Operation Worker: AMT machine amt4.domaindemo.com can't be waken up. Error code: 0x80072F8F  7/29/2009 10:59:31 AM  3884 (0x0F2C)
      Auto-worker Thread Pool: Warning, Failed to run task this time. Will retry(1) it  7/29/2009 10:59:31 AM  3884 (0x0F2C)

    After investigation, I've seen that the problem already occurs in the 2nd stage of the provisioning:

      Start 2nd stage provision on AMT device amt4.domaindemo.com.  8/2/2009 4:55:12 PM  2944 (0x0B80)
      session params : https://amt4.domaindemo.com:16993 , 11001  8/2/2009 4:55:12 PM  2944 (0x0B80)
      Delete existing ACLs...  8/2/2009 4:55:12 PM  2944 (0x0B80)
      ERROR: Invoke(invoke) failed: 80020009argNum = 0  8/2/2009 4:55:14 PM  2944 (0x0B80)
      Description: A security error occurred  8/2/2009 4:55:14 PM  2944 (0x0B80)
      Error: Cannot Enumerate User Acl Entries.  8/2/2009 4:55:14 PM  2944 (0x0B80)
      Error: CSMSAMTProvTask::StartProvision Fail to call AMTWSManUtilities::DeleteACLs  8/2/2009 4:55:14 PM  2944 (0x0B80)
      Error: Can not finish WSMAN call with target device. 1. Check if there is a winhttp proxy to block connection. 2. Service point is trying to establish connection with wireless IP address of AMT firmware but wireless management has NOT enabled yet. AMT firmware doesn't support provision through wireless connection. 3. For greater than 3.x AMT, there is a known issue in AMT firmware that WSMAN will fail with FQDN longer than 44 bytes. (MachineId = 17)  8/2/2009 4:55:14 PM  2944 (0x0B80)
      STATMSG: ID=7208 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_AMT_OPERATION_MANAGER" SYS=JE-DEV-MS0 SITE=JR1 PID=1756 TID=2944 GMTDATE=Sun Aug 02 14:55:14.281 2009 ISTR0="amt4.domaindemo.com" ISTR1="amt4.domaindemo.com" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0  8/2/2009 4:55:14 PM  2944 (0x0B80)

    This error is consistent across all the other 2nd stage provisioning tasks (Add ACLs, Enable Web UI, etc.). I've opened the certification authority, and I see that the certificates were issued to the SCCM site server instead of the AMT client! What could be the reason for this failure? What is the problematic definition for the certificate? Thank you in advance!!!

    Read the article

  • ASP.NET Multi-Select Radio Buttons

    - by Ajarn Mark Caldwell
    “HERESY!” you say, “Radio buttons are for single-select items!  If you want multi-select, use checkboxes!”  Well, I would agree, and that is why I consider this a significant bug that ASP.NET developers need to be aware of.  Here’s the situation.

    If you use ASP:RadioButton controls on your WebForm, then you know that in order to get them to behave properly, that is, to define a group in which only one of them can be selected by the user, you use the GroupName attribute and set the same value on each one.  For example:

      <asp:RadioButton runat="server" ID="rdo1" GroupName="GroupName" Checked="true" />
      <asp:RadioButton runat="server" ID="rdo2" GroupName="GroupName" />

    With this configuration, the controls will render to the browser as HTML Input / Type=radio tags, and when the user selects one, the browser will automatically deselect the other one so that only one can be selected (checked) at any time. BUT, if you use server-side code to manipulate the Checked attribute of these controls, it is possible to set them both to believe that they are checked:

      rdo2.Checked = true; // Does NOT change the Checked attribute of rdo1 to be false.

    As long as you remain in server-side code, the system will believe that both radio buttons are checked (you can verify this in the debugger).  Therefore, if you later have code that looks like this:

      if (rdo1.Checked)
      {
          DoSomething1();
      }
      else
      {
          DoSomethingElse();
      }

    then it will always evaluate the condition to be true and take the first action.  The good news is that if you return to the client with multiple radio buttons checked, the browser tries to clean that up for you and make only one of them really checked.  It turns out that the last one on the screen wins, so in this case, you will in fact end up with rdo2 as checked, and if you then make a trip to the server to run the code above, it will appear to be working properly.  However, if your page initializes with rdo2 checked and in code you set rdo1 to checked also, then when you go back to the client, rdo2 will remain checked, again because it is the last one and the last one checked “wins”.

    And this gets even uglier if you ever set these radio buttons to be disabled.  In that case, although the client browser renders the radio buttons as though only one of them is checked, the system actually retains the value of both of them as checked, and your next trip to the server will really frustrate you because the browser showed rdo2 as checked, but your DoSomething1() routine keeps getting executed.

    The following is sample code you can put into any WebForm to test this yourself.

      <body>
          <form id="form1" runat="server">
              <h1>Radio Button Test</h1>
              <hr />
              <asp:Button runat="server" ID="cmdBlankPostback" Text="Blank Postback" />
              &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
              <asp:Button runat="server" ID="cmdEnable" Text="Enable All" OnClick="cmdEnable_Click" />
              &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
              <asp:Button runat="server" ID="cmdDisable" Text="Disable All" OnClick="cmdDisable_Click" />
              &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
              <asp:Button runat="server" ID="cmdTest" Text="Test" OnClick="cmdTest_Click" />
              <br /><br /><br />
              <asp:RadioButton ID="rdoG1R1" GroupName="Group1" runat="server" Text="Group 1 Radio 1" Checked="true" /><br />
              <asp:RadioButton ID="rdoG1R2" GroupName="Group1" runat="server" Text="Group 1 Radio 2" /><br />
              <asp:RadioButton ID="rdoG1R3" GroupName="Group1" runat="server" Text="Group 1 Radio 3" /><br />
              <hr />
              <asp:RadioButton ID="rdoG2R1" GroupName="Group2" runat="server" Text="Group 2 Radio 1" /><br />
              <asp:RadioButton ID="rdoG2R2" GroupName="Group2" runat="server" Text="Group 2 Radio 2" Checked="true" /><br />
          </form>
      </body>

      protected void Page_Load(object sender, EventArgs e)
      {
      }

      protected void cmdEnable_Click(object sender, EventArgs e)
      {
          rdoG1R1.Enabled = true;
          rdoG1R2.Enabled = true;
          rdoG1R3.Enabled = true;
          rdoG2R1.Enabled = true;
          rdoG2R2.Enabled = true;
      }

      protected void cmdDisable_Click(object sender, EventArgs e)
      {
          rdoG1R1.Enabled = false;
          rdoG1R2.Enabled = false;
          rdoG1R3.Enabled = false;
          rdoG2R1.Enabled = false;
          rdoG2R2.Enabled = false;
      }

      protected void cmdTest_Click(object sender, EventArgs e)
      {
          rdoG1R2.Checked = true;
          rdoG2R1.Checked = true;
      }

      protected void Page_PreRender(object sender, EventArgs e)
      {
      }

    After you copy the markup and code-behind into the appropriate files, I recommend you set a breakpoint on Page_Load as well as cmdTest_Click, and add each of the radio button controls to the Watch list so that you can walk through the code and see exactly what is happening.  Use the Blank Postback button to cause a postback to the server so you can inspect things without making any changes.

    The moral of the story is: if you do server-side manipulation of the Checked status of RadioButton controls, then you need to set ALL of the controls in a group whenever you want to change one.

    Read the article

  • Getting Started with Amazon Web Services in NetBeans IDE

    - by Geertjan
    When you need to connect to Amazon Web Services, NetBeans IDE gives you a nice start. You can drag and drop the "itemSearch" service into a Java source file and then various Amazon files are generated for you. From there, you need to do a little bit of work because the request to Amazon needs to be signed before it can be used. Here are some references and places that got me started:

    - http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html
    - http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html
    - https://affiliate-program.amazon.com/gp/flex/advertising/api/sign-in.html

    You definitely need to sign up to the Amazon Associates program and also register/create an Access Key ID, which will also get you a Secret Key, as well. Here's a simple Main class that I created that hooks into the generated RestConnection/RestResponse code created by NetBeans IDE:

      public static void main(String[] args) {
          try {
              String searchIndex = "Books";
              String keywords = "Romeo and Juliet";
              RestResponse result = AmazonAssociatesService.itemSearch(searchIndex, keywords);
              String dataAsString = result.getDataAsString();
              int start = dataAsString.indexOf("<Author>") + 8;
              int end = dataAsString.indexOf("</Author>");
              System.out.println(dataAsString.substring(start, end));
          } catch (Exception ex) {
              ex.printStackTrace();
          }
      }

    Then I deleted the generated properties file and the authenticator and changed the generated AmazonAssociatesService.java file to the following:

      public class AmazonAssociatesService {

          private static void sleep(long millis) {
              try {
                  Thread.sleep(millis);
              } catch (Throwable th) {
              }
          }

          public static RestResponse itemSearch(String searchIndex, String keywords) throws IOException {
              SignedRequestsHelper helper;
              RestConnection conn = null;
              Map queryMap = new HashMap();
              queryMap.put("Service", "AWSECommerceService");
              queryMap.put("AssociateTag", "myAssociateTag");
              queryMap.put("AWSAccessKeyId", "myAccessKeyId");
              queryMap.put("Operation", "ItemSearch");
              queryMap.put("SearchIndex", searchIndex);
              queryMap.put("Keywords", keywords);
              try {
                  helper = SignedRequestsHelper.getInstance(
                          "ecs.amazonaws.com",
                          "myAccessKeyId",
                          "mySecretKey");
                  String sign = helper.sign(queryMap);
                  conn = new RestConnection(sign);
              } catch (IllegalArgumentException | UnsupportedEncodingException | NoSuchAlgorithmException | InvalidKeyException ex) {
              }
              sleep(1000);
              return conn.get(null);
          }
      }

    Finally, I copied this class into my application, which you can see is referred to above:

      http://code.google.com/p/amazon-product-advertising-api-sample/source/browse/src/com/amazon/advertising/api/sample/SignedRequestsHelper.java

    Here's the completed app, mostly generated via the drag/drop shown at the start, but slightly edited as shown above. That's all, now everything works as you'd expect.
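    The signing that SignedRequestsHelper performs is essentially an HMAC-SHA256 over a canonicalized query string. The sketch below walks through the same steps in Python, purely to illustrate what the helper does; the access key, secret key and associate tag are the same placeholder values used above, not working credentials.

      import base64
      import hashlib
      import hmac
      from urllib.parse import quote

      HOST, PATH, SECRET = "ecs.amazonaws.com", "/onca/xml", "mySecretKey"

      def signed_url(params):
          # Sort the parameters, RFC 3986-encode them, build the string to sign,
          # HMAC-SHA256 it with the secret key, and append the Base64 signature.
          canonical = "&".join(
              quote(k, safe="-_.~") + "=" + quote(v, safe="-_.~")
              for k, v in sorted(params.items()))
          to_sign = "GET\n%s\n%s\n%s" % (HOST, PATH, canonical)
          digest = hmac.new(SECRET.encode(), to_sign.encode(), hashlib.sha256).digest()
          signature = quote(base64.b64encode(digest).decode(), safe="-_.~")
          return "http://%s%s?%s&Signature=%s" % (HOST, PATH, canonical, signature)

      print(signed_url({
          "Service": "AWSECommerceService", "Operation": "ItemSearch",
          "SearchIndex": "Books", "Keywords": "Romeo and Juliet",
          "AWSAccessKeyId": "myAccessKeyId", "AssociateTag": "myAssociateTag",
          "Timestamp": "2013-01-01T12:00:00Z",
      }))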

    Read the article

  • Transient network dropout for Xen DomU's

    - by Stephen C
    We've got a CentOS server running a cluster of virtuals. Occasionally the cluster's internal network drops out for a minute or so... and then comes back. The problem is somehow related to the actual network traffic, but it is not a simple load issue. (The system is generally lightly loaded, and the problem occurs irrespective of actual load.)

    The setup:
    - CentOS 5.6 on Dom0, various CentOS releases on the DomUs
    - Hardware: a Dell R710 with a Broadcom NextXpress 2 NIC (sigh), using the latest drivers for the NIC from Broadcom
    - Xen configured to use network-bridge and vif-bridge
    - Some iptables tweaks to route an unrelated port to one of the virtuals

    The system has one externally visible IP address, and Dom0 runs an Apache httpd configured with a number of virtual hosts, each of which reverse proxies to web servers running on the virtuals. (The virtuals have to be NAT'ed, primarily because we don't have enough allocated public IP addresses.)

    The symptoms:
    - Works fine most of the time.
    - When someone tries to UPLOAD a large file to one of the virtuals, the internal network drops out... for all virtuals:
      - The Dom0 httpd sees a network timeout talking to the backend server on the virtual and reports a 502.
      - A previously established ssh connection from Dom0 to any of the DomUs freezes.
      - Our monitoring shows ping failures for traffic between virtuals.
      - The Xen consoles to the DomUs do not freeze.
      - No log messages in any log files that I can see, on either Dom0 or the DomUs... apart from the Dom0 httpd logs.
    - After a minute or so, the problem clears by itself.
    - This is 100% reproducible.

    What we've tried:
    - Downloading, building and installing the latest bnx2 driver on Dom0
    - Turning off MSI on the NIC - adding "options bnx2 disable_msi=1" to /etc/modprobe.conf
    - Turning off TCP segmentation offload - "ethtool -K eth0 tso off"
    - Sacrificing a black rooster at midnight

    I've exhausted all my options apart from switching to KVM... or slaughtering more roosters. Any suggestions?
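    While chasing this, a timestamped record of exactly when inter-DomU connectivity drops (so it can be correlated with the uploads) can be handy. A rough monitoring sketch in Python, with made-up internal addresses for the DomUs:

      import datetime
      import subprocess
      import time

      HOSTS = ["192.168.122.11", "192.168.122.12"]   # placeholder DomU addresses

      while True:
          for host in HOSTS:
              # One ping with a 1-second timeout; log only the failures.
              ok = subprocess.run(
                  ["ping", "-c", "1", "-W", "1", host],
                  stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
              ).returncode == 0
              if not ok:
                  print("%s lost contact with %s"
                        % (datetime.datetime.now().isoformat(), host), flush=True)
          time.sleep(5)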

    Read the article

  • Active Directory Corrupted In Windows Small Business Server 2011 - Server No Longer Domain Controller

    - by ThinkerIV
    I have a rather bad problem with my Windows SBS 2011. First of all, I'll give the background to what caused the problem.

    I was setting up a new small business server network. I had my job about finished. The server was working great, all the workstations had joined the domain, and I had all my applications and data moved to the server. I thought I was done. But then it happened. I tried adding one more computer to the domain, and to my dismay its computer name was set to the same name as the server. Apparently when a computer joins a domain with the same name as another machine that is already on the domain, it overrides the first one. For normal workstations, this is not a big deal: you just delete the computer from AD and rejoin the original computer to the domain. However, for a server that is the domain controller it is a whole different story. Since the server got overridden in AD, it is no longer the domain controller. The DNS service is not working and all kinds of other services are failing as well.

    So the question is, what are my options? I am embarrassed to admit it, but since this is a new server, one thing I did not have set up yet was backup. So I have no backups to work from. I am worried that things are broken enough that I might need to do a reinstall. However, I already have several days' worth of configuration in this server, so I would obviously prefer a fix that would prevent me from needing to do a reinstall. All the server components are there and installed correctly, but they are misconfigured (I think it is basically just Active Directory). So I have the feeling that if I did the right thing I could solve the issue without a reinstall.

    Is there any way to rerun the component that installs the initial configuration, to "convert" the base Windows Server 2008 R2 install into an SBS? In other words, in the Program Files folder there is an application called SBSsetup.exe; is there any way to rerun this and have it reconfigure AD, etc. to work with SBS? Any insight will be greatly appreciated. Thanks.

    Read the article

  • Bypassing "Found New Hardware Wizard" / Setting Windows to Install Drivers Automatically

    - by Synetech inc.
    Hi, my motherboard finally died after the better part of a decade, so I bought a used system. I put my old hard drive and sound card in the new system, and connected my old keyboard and mouse (the rest of the components—CPU, RAM, mobo, video card—are from the new system). I knew beforehand that it would be a challenge to get Windows to boot and install drivers for the new hardware (particularly since the foundational components are new), but I am completely unable to even attempt to get through the work of installing drivers for things like the video card, because the keyboard and mouse won't work (they do work in the BIOS screen, in DOS mode, in Windows 7, in XP's boot menu, etc., just not in Windows XP itself).

    Whenever I try to boot XP (in normal or safe mode), I get a bunch of balloons popping up for all the new hardware detected, and a Found New Hardware Wizard for the processor (obviously it has to install drivers for the lowest-level components on up). Unfortunately I cannot click Next, since the keyboard and mouse won't work yet because the motherboard drivers (for the PS/2 or USB ports) are not yet installed. I even tried a serial mouse, but to no avail—again, it does work in DOS, 7, etc., but not XP, because it doesn't have the serial port driver installed.

    I tried mounting the SOFTWARE and SYSTEM hives under Windows 7 in order to manually set the "unsigned drivers warning" to ignore (using both of the driver-signing policy settings that I found references to). That didn't work; I still get the wizard. They are not even fancy, proprietary, third-party, or unsigned drivers. They are drivers that come with Windows—as the drivers for CPU, RAM, IDE controller, etc. tend to be. And the keyboard and mouse drivers are the generic ones at that (but like I said, those are irrelevant since the drivers for the ports that they are connected to are not yet installed).

    Obviously at some point in time over the past several years, a setting got changed to make Windows always prompt me when it detects new hardware. (It was also configured to show the Shutdown Event Tracker on abnormal shutdowns, so I had to turn that off so that I could even see the desktop.) Oh, and I tried deleting all of the PNF files so that they get regenerated, but that too did not help.

    Does anyone know how I can reset Windows to at least try to automatically install drivers for new hardware before prompting me if it fails? Conversely, does anyone know how exactly one turns off automatic driver installation (and prompting with the wizard)? Thanks a lot.

    Read the article

  • Postfix + SASLAUTHD + MySQL authentication problems

    - by Or W
    I've been trying to sort this out for the past 6 hours or so. This is the error message I'm facing (running CentOS x64):

    /var/log/maillog:

      Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: SASL authentication failure: Password verification failed
      Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL PLAIN authentication failed: authentication failure
      Jun 22 20:42:49 ptroa postfix/smtpd[10130]: warning: bzq-79-177-192-133.red.bezeqint.net[79.177.192.133]: SASL LOGIN authentication failed: authentication failure

    /var/log/messages:

      Jun 22 20:15:38 ptroa saslauthd[9401]: do_auth : auth failure: [user=myuser] [service=smtp] [realm=domain.com] [mech=pam] [reason=PAM auth error]

    I have Dovecot installed as well and I'm able to receive emails via the MySQL authentication. The problem is when I'm trying to use SMTP to send out emails. Some config files:

    /etc/postfix/main.cf:

      # See /usr/share/postfix/main.cf.dist for a commented, more complete version
      # Debian specific:  Specifying a file name will cause the first
      # line of that file to be used as the name.  The Debian default
      # is /etc/mailname.
      myorigin = /etc/mailname
      smtpd_banner = Server Message
      biff = no
      # appending .domain is the MUA's job.
      append_dot_mydomain = no
      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h
      readme_directory = /usr/share/doc/postfix
      # TLS parameters
      smtpd_tls_cert_file = /etc/postfix/smtpd.cert
      smtpd_tls_key_file = /etc/postfix/smtpd.key
      smtpd_use_tls = yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
      # information on enabling SSL in the smtp client.
      myhostname = domain.com
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination =
      relayhost =
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all
      html_directory = /usr/share/doc/postfix/html
      message_size_limit = 30720000
      virtual_alias_domains =
      virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf
      virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
      virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
      virtual_mailbox_base = /home/vmail
      virtual_uid_maps = static:5000
      virtual_gid_maps = static:5000
      smtpd_sasl_auth_enable = yes
      broken_sasl_auth_clients = yes
      smtpd_sasl_authenticated_header = yes
      smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
      virtual_create_maildirsize = yes
      virtual_maildir_extended = yes
      proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_cano$
      virtual_transport = dovecot
      dovecot_destination_recipient_limit = 1

    /etc/default/saslauthd:

      START=yes
      DESC="SASL Authentication Daemon"
      NAME="saslauthd"
      MECHANISMS="pam"
      MECH_OPTIONS=""
      THREADS=5
      OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    /etc/pam.d/smtp:

      #%PAM-1.0
      #auth include password-auth
      #account include password-auth
      auth required pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1
      account sufficient pam_mysql.so user=mail_admin passwd=password host=127.0.0.1 db=mail table=users usercolumn=email passwdcolumn=password crypt=1 verbose=1
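    When isolating this kind of failure, it can help to drive the SMTP AUTH exchange by hand and watch the full dialogue instead of relying on a mail client. A small sketch using Python's smtplib; the hostname and credentials are placeholders for a known-good mailbox from the MySQL users table.

      import smtplib

      HOST = "mail.domain.com"            # placeholder for the Postfix server
      USER = "someone@domain.com"         # placeholder mailbox from the users table
      PASSWORD = "the-mailbox-password"   # placeholder

      with smtplib.SMTP(HOST, 25, timeout=10) as smtp:
          smtp.set_debuglevel(1)   # print the whole SMTP conversation
          try:
              smtp.login(USER, PASSWORD)   # tries AUTH PLAIN / LOGIN, like a mail client would
              print("authentication accepted")
          except smtplib.SMTPAuthenticationError as exc:
              print("authentication rejected:", exc)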

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen many examples where users failed to restore their database because they made a mistake while taking the backup and were not aware of it. The tail-log backup was such an issue in earlier versions of SQL Server, but in the latest version the Microsoft team has cleared up the confusion with additional information on the backup and restore screens themselves. Now that the additional information is there, a few more people are confused because they have no clue what it means: previously they did not see this as an issue, and now they are discovering the tail log as something new to learn. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, backing up and recovering the tail end of a transaction log.

    Many times when restoring a database over an existing database, SQL Server will warn you about needing to make a tail-end-of-the-log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log. Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log.

    In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online to change the state to RECOVERY PENDING and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR; as its name says, it does not try to truncate the log. This is a great demo, but how could I back up the tail end of the log if the failure destroys my entire instance of SQL Server and all I have left is the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log.

    If I am performing proper backups, then my most recent full, differential and log files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file, and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration of how to perform these tasks. I really hope that none of you ever has to perform this in production; however, it is a really good idea to know how to do it just in case. It really isn't a matter of "IF" you will have to perform a restore of a production system but more of a "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss and not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL backup and recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
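
    A minimal sketch of the tail-log backup described above, with a hypothetical database name (OrdersDB), instance name and backup path; run from PowerShell or any shell that has sqlcmd on the PATH:

        # Hypothetical names and paths - adjust for your environment.
        # Per the post, NO_TRUNCATE is equivalent to COPY_ONLY plus CONTINUE_AFTER_ERROR,
        # so the backup still works when the data file (MDF) is damaged or missing.
        sqlcmd -S .\SQL2014 -E -Q "BACKUP LOG [OrdersDB] TO DISK = N'D:\Backups\OrdersDB_tail.trn' WITH NO_TRUNCATE"

    The resulting .trn file is restored as the very last step of the recovery chain (full, differential, remaining log backups, then the tail-log backup) WITH RECOVERY.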

    Read the article

  • How to create NTFS partition in Linux to install Windows 7 from USB?

    - by Michal Stefanow
    I messed up my computer and need help. Generally: install Windows 7 from USB. Problem: "Setup was unable to create a new system partition". When the first attempt to install Windows 7 failed, I tried a Linux live USB, installed the distro to the HDD, and erased all the existing partitions. Current state (fdisk -l) [writing from another computer, so no copy and paste]: /dev/sda1 305GB Linux; /dev/sda2 7GB Extended; /dev/sda5 7GB Linux swap / Solaris.

    To create a new NTFS partition I ran fdisk /dev/sda, then n (for new), p (for primary), 3 (for partition number), and got "No free sectors available". The whole HDD was formatted a couple of minutes before, so there is a lot of free space, but how do I resize a partition? I cannot find an option for resizing in man fdisk. Some people say I should use gparted, but my distro does not contain that package, and it does not support my wireless drivers either, so I have serious problems downloading anything. I also tried cfdisk, but any command results in: "cfdisk: bad primary partition 1: partition ends in the final partial cylinder". I also tried removing partition 1 and then creating a new one (so there is no "no free sectors"). I receive a warning: "Re-reading the partition table failed with error 16: Device or resource busy. The kernel still uses the old table. The new table will be used at the next reboot." After restarting: "grub rescue, no known filesystem". That may indicate that some changes have been made, BUT when running the Windows 7 installer I get another error: "Windows cannot be installed to Disk 0 Partition 1". More detailed: "Windows cannot be installed to this hard disk space. Windows must be installed to a partition formatted as NTFS." So I format the drive using the Windows 7 installer, BUT this time yet another error: "Setup was unable to create a new system partition or locate an existing system partition. See the setup log files for more information." Apparently I cannot access the logs (how?) and I am back to the drawing board with my live USB (this time showing the partition as HPFS/NTFS).

    Any suggestions on how to install Windows 7? Should I reinstall Linux to the HDD, erase the existing partitions once again, and use parted rather than gparted (parted is included in the distro)? Or maybe I should create another bootable USB such as Parted Magic to painlessly create partitions? I just want to install Windows 7 from USB; my laptop is semi-operational and I am ready to receive some help regarding fdisk and creating NTFS partitions.

    UPDATE: I did as suggested (removed all the partitions) and tried to install into unallocated space. I tried to create a new partition and format it. Same error: "Setup was unable to create a new system partition". I came to the conclusion it may have something to do with TrueCrypt, which I recently installed. Right now I am trying to fix the MBR (as I have no way to create a rescue disc without an optical drive).
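
    If parted really is included in the distro, a minimal sketch for turning the whole disk into a single NTFS partition; this assumes the target disk is /dev/sda and that everything on it can be destroyed:

        # WARNING: wipes the existing partition table on /dev/sda
        sudo parted -s /dev/sda mklabel msdos
        sudo parted -s /dev/sda mkpart primary ntfs 1MiB 100%
        # Formatting needs mkfs.ntfs from the ntfsprogs/ntfs-3g package; if that is
        # not available offline, leave the partition unformatted and let the
        # Windows 7 installer format it instead.
        sudo mkfs.ntfs -f /dev/sda1

    As a hedge: the "unable to create a new system partition" message is also commonly reported when the installer sees the USB stick as the first disk, so checking the BIOS boot order (or unplugging and re-plugging the stick once setup has loaded) may be worth trying independently of how the partition is created.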

    Read the article

  • Is it possible to "stealth" dual boot a machine?

    - by BrianH
    I have a loaner laptop that has MS Windows with locked-down permissions. It works okay for what I need to do, but I started wondering if there was a way to install a separate Windows OS on a separate hard drive to do what I want to do on it.

    Virtual: I wish I could use VirtualBox or VMware, but that is not an option (I even tried VBox portable).

    External Drive: My next trial was to see if it was possible to install Windows on an external drive, and then plug that drive in and boot from it whenever I wanted my own OS. After a few Google searches, I see that is not really a possibility.

    Swap Primary Drive: Another option would be to get a second internal hard drive, take the existing HD out, and install a new Windows OS on the secondary HD. This would mean swapping the internal hard drive each time I want to switch OSs - doable, but not very convenient.

    Dual Boot: The laptop has an expansion slot where a second hard drive can be plugged in quickly. I thought about dual booting, but I don't want to mess with the MBR on the primary hard drive. When I have to give the laptop back, I don't want a dual-boot screen to pop up.

    Summary: Is there a way to have two hard drives in a machine, each with its own OS, and maybe use BIOS settings to have only one hard drive active at a time? That way both hard drives could be physically connected, but only one would actually be active at a time. I basically want a second OS that does not (cannot) affect the existing OS in any way, and can be removed at any time without affecting the existing OS. The secondary OS does not need any of the files on the main hard drive - it's basically like having two separate computers using the same hardware... Is this possible, or would it be easier just to go out and buy a different laptop? Thanks in advance!

    EDIT: I just discovered that my BIOS allows me to pick (at startup) which hard drive I want to boot from. I poked around in the BIOS and there is no place to disable certain devices, like the primary hard drive. My only concern about plugging in a second hard drive and installing Windows to it is that it will mess with the primary hard drive, or add a bootloader screen to pick which Windows install to use. My thought would be to physically unplug the primary, plug in the secondary, and install Windows to the secondary. After the install is working properly, I can plug the primary back in and use the BIOS feature to determine which drive to boot from. Is there any way, after I have two separate installs on two separate hard drives, that one of the installs could mess with the MBR on the other drive?
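
    If the main worry is the installer touching the primary drive's boot record, one precaution is to back up the first sector of that disk from a Linux live USB before installing to the second drive. A sketch, assuming the primary disk shows up as /dev/sda and a USB stick is mounted at /media/usb (both assumptions):

        # Save the MBR of the primary disk (boot code + partition table, 512 bytes)
        sudo dd if=/dev/sda of=/media/usb/sda-mbr.bak bs=512 count=1
        # If a dual-boot menu ever appears, restore only the boot code (446 bytes)
        # so the partition table stays untouched
        sudo dd if=/media/usb/sda-mbr.bak of=/dev/sda bs=446 count=1

    The approach described in the EDIT - physically unplugging the primary drive during the install and using the BIOS boot-device menu afterwards - sidesteps the issue entirely, since setup cannot write to a disk that is not connected.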

    Read the article

  • Soft lockup after upgrade - cannot install from live CD

    - by nbm
    I dual-boot a MacIntel Core 2 Duo with nVidia graphics. I ran the upgrade from Ubuntu 13.10 to 14.04 (64-bit). On restart I ran into: "{numbers} Bug: soft lockup - CPU#0 stuck for 22s! [swapper/0:1]". Tried loading an earlier kernel: same problem. Tried re-installing Ubuntu from a live CD that has worked in the past (version 13.04): same problem. Tried re-partitioning the hard drive using the Mac OS X Disk Utility and then installing Ubuntu 14.04 LTS from the live CD: same problem. It is not possible to verify the live CD (that triggers the same "soft lockup" bug). Tried installing from the live CD with version 13.04 that I know works (that's how I got Ubuntu on this machine in the first place): same problem. I know this is not a hardware problem, as OS X works just fine; I am using it right now on the same machine. I have been using various versions of Ubuntu for two years.

    Things I cannot do: open a terminal; verify the CD image; start Ubuntu from the CD (same soft lockup problem). This problem is similar to some other questions, none of which have been satisfactorily answered: "Ubuntu 14.04 soft lockup on Vostro 3500", "Cannot do fresh install of Ubuntu 13.04 while booting from DVD: 'soft lockup' bug", "Live CD stalls when installing Ubuntu 13.10".

    UPDATE 6/11/14: Following some much-appreciated advice from bain (see below) I burned a 12.04 LTS disc and started with the kernel parameters noapic, nolapic, acpi=off, nomodeset, elevator=deadline, and clocksource=jiffies. With all of these parameters I was able to load the 12.04 LTS CD ("Try without installing"), and it worked fine. However, as soon as I tried to install Ubuntu from the CD, my wired ethernet (eth0) connection would hang. There are already various askubuntu questions and bug reports about this problem, none of which had answers for me. (E.g., dhclient eth0 does nothing, none of the various reset commands does anything, manually setting the IP etc. does nothing. I could reliably kill the ethernet connection by clicking "Install Ubuntu" every single time.) I could go ahead and install 12.04 without an internet connection, but the install would freeze after mostly completing (I tried several times). There were some relevant error messages in the install output that, IIRC, had to do with searching for missing files and not being able to access eth0 (the internet) to get them. To be honest I gave up at that point and I'm not sure I wrote those down. If I find some notes I will post them.

    At this point I no longer have Ubuntu on my system. I wiped the partitions and am using OS X exclusively. I am leaving this question in case it helps anyone else with similar problems. I love open source and I love Linux, and the next machine I get I will probably just build from Arch. At the moment I miss the repositories and a lot of other things about Ubuntu, but the OS X terminal is 'nix, I can pretty much use all the open source apps I like, and while I am not a fan of the Apple software it gets the job done for me. Unlike Ubuntu, which can't even install.

    I realize this isn't necessarily the place for a soapbox speech, but when I first installed 12.04 several years ago there were already people in the community complaining that Canonical was going too "commercial". But I loved it. Several years later and all I've seen is Canonical adding more not-so-useful bells and whistles to Ubuntu while continually failing to fix basic problems on upgrades. With a dual-boot (and sometimes triple-boot) system it always took me some tweaking to get an upgrade to work, and to some extent that is okay. But at this point I feel like Canonical ought to just put a price tag on Ubuntu. All I see is more commercialism and advertising and product tie-ins, while ongoing problems do not get fixed. I am a big fan of open-source, not-for-profit enterprise. I am also a big fan of for-profit enterprise, which certainly has its place and usefulness. I am not a fan of companies who pretend to be in favor of open source but really are just out to make a buck, and IMNSHO that is what Canonical has become. This is a great community and I wish you all the best, but my next install of Linux will not be Ubuntu.
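
    For anyone hitting the same soft lockup: kernel parameters like the ones listed in the update are usually passed either once at the boot menu (press F6 or e at the live CD menu and append them to the line that starts with "linux"), or persistently through GRUB 2 on an installed system. A sketch of the persistent variant, assuming a standard /etc/default/grub:

        # In /etc/default/grub, extend the default command line, for example:
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset noapic nolapic acpi=off"
        # then regenerate the boot menu:
        sudo update-grub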

    Read the article

  • JavaDay Taipei 2014 Trip Report

    - by reza_rahman
    JavaDay Taipei 2014 was held at the Taipei International Convention Center on August 1st. Organized by Oracle University, it is one of the largest Java developer events in Taiwan. This was another successful year for JavaDay Taipei, with a fully sold-out venue packed with youthful, energetic developers (this was my second time at the event and I have already been invited to speak again next year!). In addition to Oracle speakers like me, Steve Chin and Naveen Asrani, the event also featured a bevy of local speakers, including Taipei Java community leaders. Topics included Java SE, Java EE, JavaFX, cloud and Big Data.

    It was my pleasure and privilege to present one of the opening keynotes for the event. I presented my session on Java EE titled "JavaEE.Next(): Java EE 7, 8, and Beyond". I covered the changes in Java EE 7 as well as what's coming in Java EE 8. I demoed the Cargo Tracker Java EE BluePrints and briefly talked about Adopt-a-JSR for Java EE 8. The slides for the keynote are available as a PDF download.

    In the afternoon I did my JavaScript + Java EE 7 talk, "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The talk was completely packed, and the slide deck is available online. The demo application code is posted on GitHub and should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial, which should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end.

    I finished off JavaDay Taipei with my talk "Using NoSQL with ~JPA, EclipseLink and Java EE" (the last session of the conference). The talk covers an interesting gap that there is surprisingly little material on out there. It has three parts: a bird's-eye view of the NoSQL landscape; how to use NoSQL via a JPA-centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc.; and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are available online, as are the JPA-based and CDI-based demos; both demos use MongoDB as the data store. Do let me know if you need help getting them up and running.

    After the event the Oracle University folks hosted a reception in the evening, which was very well attended by organizers, speakers and local Java community leaders.

    I am extremely saddened by the fact that this otherwise excellent trip was scarred by terrible tragedy. After the conference I joined a few folks for a hike on Maokong Mountain on Saturday. The group included friends in the Taiwanese Java community, including Ian and Robbie Cheng. Without warning, fatal tragedy struck on a remote part of the trail. Despite the best efforts of our group, the excellent Taiwanese emergency rescue team and world-class Taiwanese physicians, we were unable to save our friend Robbie Cheng's life. Robbie was just thirty-four years old and is survived by his younger brother, mother and father.

    Being the father of a young child myself, I can only imagine the deep sorrow that this senseless loss unleashes. Robbie was a key member of the Taiwanese Java community and at one point a Java evangelist at Sun. Ironically, the only picture I was able to take of the trail was mere moments before the tragedy. I thought I should place him in that picture in profoundly respectful memoriam.

    Perhaps there is some solace in the fact that there is something inherently honorable in living a bright life, dying young and meeting one's end on a beautiful remote mountain trail few venture to behold, let alone attempt to ascend, in a long and tired lifetime. Perhaps I'd even say it's a fate I would not entirely regret facing if it were my own. With that thought in mind it seems appropriate to quote some lyrics from the song "Runes to My Memory" by the legendary Swedish heavy metal band Amon Amarth, idealizing a fallen Viking warrior cut down in his prime:

    "Here I lie on wet sand / I will not make it home / I clench my sword in my hand / Say farewell to those I love / When I am dead / Lay me in a mound / Place my weapons by my side / For the journey to Hall up high / When I am dead / Lay me in a mound / Raise a stone for all to see / Runes carved to my memory"

    I submit my deepest condolences to Robbie's family and hope my next trip to Taiwan ends on a less somber note.

    Read the article

  • unable to install anything on ubuntu 9.10 with aptitude

    - by Srisa
    Hello. Earlier I could install software by using the sudo aptitude install command. Today when I tried to install rkhunter I got errors. It is not just rkhunter; I am not able to install anything. Here is the text output:

        user@server:~$ sudo aptitude install rkhunter
        ................
        ................
        20% [3 rkhunter 947/271kB 0%]
        Get:4 http://archive.ubuntu.com karmic/universe unhide 20080519-4 [832kB]
        40% [4 unhide 2955/832kB 0%]
        100% [Working]
        Fetched 1394kB in 1s (825kB/s)
        Preconfiguring packages ...
        Selecting previously deselected package lsof.
        (Reading database ...
        ................
        (Reading database ... 95%
        (Reading database ... 100%
        (Reading database ... 20076 files and directories currently installed.)
        Unpacking lsof (from .../lsof_4.81.dfsg.1-1_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/lsof_4.81.dfsg.1-1_amd64.deb (--unpack):
         unable to create `/usr/bin/lsof.dpkg-new' (while processing `./usr/bin/lsof'): Permission denied
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Selecting previously deselected package libmd5-perl.
        Unpacking libmd5-perl (from .../libmd5-perl_2.03-1_all.deb) ...
        Selecting previously deselected package rkhunter.
        Unpacking rkhunter (from .../rkhunter_1.3.4-5_all.deb) ...
        dpkg: error processing /var/cache/apt/archives/rkhunter_1.3.4-5_all.deb (--unpack):
         unable to create `/usr/bin/rkhunter.dpkg-new' (while processing `./usr/bin/rkhunter'): Permission denied
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Selecting previously deselected package unhide.
        Unpacking unhide (from .../unhide_20080519-4_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/unhide_20080519-4_amd64.deb (--unpack):
         unable to create `/usr/sbin/unhide-posix.dpkg-new' (while processing `./usr/sbin/unhide-posix'): Permission denied
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Processing triggers for man-db ...
        Errors were encountered while processing:
         /var/cache/apt/archives/lsof_4.81.dfsg.1-1_amd64.deb
         /var/cache/apt/archives/rkhunter_1.3.4-5_all.deb
         /var/cache/apt/archives/unhide_20080519-4_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        A package failed to install. Trying to recover:
        Setting up libmd5-perl (2.03-1) ...
        Building dependency tree... 0%
        Building dependency tree... 50%
        Building dependency tree
        Reading state information... 0%
        ...........
        ....................

    I have removed some lines to reduce the text; all the error messages are in there, though. My experience with Linux is limited and I am not sure what the problem is or how it is to be resolved. Thanks.
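
    One way to narrow this down: root getting "Permission denied" while creating files under /usr/bin usually means the filesystem is mounted read-only or the target directory carries an immutable attribute. A sketch of the checks (nothing here is specific to rkhunter):

        mount | egrep ' on (/|/usr) '   # an "ro" mount option here would explain the errors
        lsattr -d /usr/bin /usr/sbin    # an 'i' (immutable) or 'a' (append-only) flag would too
        # Only if the immutable flag really is set:
        sudo chattr -i /usr/bin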

    Read the article

  • Recognizing Dell EqualLogic with Nagios

    - by user3677595
    EDIT: All firmware and models are compatible, which is why nothing is posted about them.

    Okay, there will be a lot here, so please bear with me. I've been working on this for a few hours now (reading manuals and such), so I'm not just coming here right out of the blue. I am working on a PRE-EXISTING Nagios server where several other existing plugins and checks are running and working. Now I want to add another server to check, so I made the following modifications.

    First and foremost, I added a file to /usr/local/nagios/libexec named check_equallogic.sh. The permissions are 755, the same as all the others. I have chowned it to nagios:nagios and in the listing it shows the owner as nagios. I then added a command to the commands.cfg file in /usr/local/nagios/etc/objects:

        # 'check_equallogic' command definition
        define command{
                command_name    check_equallogic
                command_line    $USER1$/check_equallogic -H $HOSTADDRESS$ -C $ARG1$ -t $ARG2$ $ARG3$
                }

    Following this, I created a file named equallogic.cfg in the objects directory, and it contains (more or less):

        define host{
                use             linux-server    ; Inherit default values from a template
                host_name       172.16.50.11    ; The name we're giving to this device
                alias           EqualLogic      ; A longer name associated with the device
                address         172.16.50.11    ; IP address of the device
                contact_groups  admins
                }

        # Check EqualLogic information
        define service{
                use                     generic-service
                host_name               172.16.50.11
                service_description     General Information
                check_command           check_equallogic!public!info
                }

    After ensuring that permissions are okay for all files, I restart the Nagios service with no errors. When I go into the web GUI, I get the following error AFTER the check runs: (Return code of 127 is out of bounds - plugin may be missing).

    Extra, probably unrelated problem: when I log into the EqualLogic array, under the audit logs I get the following error:

        Level: AUDIT
        Time: 26/05/2014 3:59:13 PM
        Member: ps4100-1
        Subsystem: agent
        Event ID: 22.7.1
        SNMP packet validation failed, request received from 172.16.10.11

    An snmpwalk receives a timeout, whereas others succeed. I will work on importing the MIBs tomorrow. The reason I mention it is that I want to make sure it is only a MIB issue on the SNMP side; if it is, then ignore this part. I am entirely unsure of what to do here.
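
    A quick manual test usually pins down a 127 ("plugin may be missing"). The sketch below assumes $USER1$ expands to /usr/local/nagios/libexec, the usual default in resource.cfg; note that the command definition above calls $USER1$/check_equallogic while the file on disk is named check_equallogic.sh, so the two names may simply not match:

        # Confirm what $USER1$ expands to
        grep '\$USER1\$' /usr/local/nagios/etc/resource.cfg
        # Run the plugin exactly as Nagios would, as the nagios user
        sudo -u nagios /usr/local/nagios/libexec/check_equallogic.sh -H 172.16.50.11 -C public -t info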

    Read the article

  • Hang while starting several daemons

    - by Adrian Lang
    I’m running a Debian Squeeze AMD64 server. The target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2, with several symptoms: the system hangs while trying to start rsyslogd; booting into runlevel 1 works, and login from the console then works; starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs; starting runlevel 2 with rsyslogd disabled works, but then logging in via the console fails (I get the motd, and then nothing); starting sshd from runlevel 1 succeeds, but then I cannot log in via ssh (sometimes password ssh login gives me the motd and then nothing, sometimes not even this, and offering a public key seems to annoy the sshd enough to not talk to me any further); when rebooting from runlevel 1, the server hangs while trying to stop apache2 (which is not running, so this really should be trivial); trying to stop apache2 when logged in in runlevel 1 hangs as well. And that's just the stuff which fails all the time. RAM has been tested, dmesg shows no problems. I have no clue.

    Update: (shortened) output from rsyslogd -c4 -d called in runlevel 1:

        rsyslogd 4.6.4 startup, compatibility mode 4, module path ''
        caller requested object 'net', not found (iRet -3003)
        Requested to load module 'lmnet'
        loading module '/user/lib/rsyslog/lmnet.so'
        module of type 2 being loaded
        conf.c requested ref for 'lmnet', refcount 1
        rsylog runtime initialized, version 4.6.4, current users 1
        syslogd.c requested ref for 'lmnet', refcount now 2

    I can kill rsyslogd with Ctrl+C, then. /var/log shows none of the configured log files, though.

    Update 2: Thanks to @DerfK I still have no clue, but at least I narrowed down the problem. I'm now testing with /etc/init.d/apache2 stop (without an apache2 running, of course), which hangs as well and looks like an even more obvious failure. After some testing I found out that a script with one single line,

        /usr/sbin/apache2ctl configtest > /dev/null 2>&1

    hangs, while the same line executed in an interactive shell works. I was not able to reduce this line further; every single part, the stream redirections and the command itself, is necessary to reproduce the hang. @DerfK also pointed me to strace, which gave a shallow hint about what kind of hang we have here:

        wait4(-1, ...                                      for the init scripts
        futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL   for the rsyslogd / apache2 binaries called by the init scripts

    The system was installed as Debian Lenny by my hoster in autumn 2011; I upgraded it to Squeeze immediately and kept it up to date with Squeeze, which then used to be testing. There were no big changes, though. I guess I never tried to reboot the system before.
    Read the article
