Daily Archives

Articles indexed Monday, November 4, 2013

Page 4/18

  • How to sort the items in a ListBox alphabetically?

    - by user2745378
    I need to sort the items in the ListBox alphabetically when the sort button is clicked (I have a sort button in the app bar), but I don't know how to achieve this. Here is the XAML; all help will be much appreciated. <phone:PhoneApplicationPage.Resources> <DataTemplate x:Key="ProjectTemplate"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="400" /> </Grid.ColumnDefinitions> <TextBlock Grid.Column="1" Text="{Binding Name}" Style="{StaticResource PhoneTextLargeStyle}" /> </Grid> </DataTemplate> </phone:PhoneApplicationPage.Resources> <Grid x:Name="ContentPanel" Grid.Row="1" Margin="0,0,12,0"> <ListBox x:Name="projectList" ItemsSource="{Binding Items}" SelectionChanged="ListBox_SelectionChanged" ItemTemplate="{StaticResource ProjectTemplate}" /> </Grid> Here's my ViewModel: namespace PhoneApp.ViewModels { public class ProjectsViewModel: ItemsViewModelBase<Project> { public ProjectsViewModel(TaskDataContext taskDB) : base(taskDB) { } public override void LoadData() { base.LoadData(); var projectsInDB = _taskDB.Projects.ToList(); Items = new ObservableCollection<Project>(projectsInDB); } public override void AddItem(Project item) { _taskDB.Projects.InsertOnSubmit(item); _taskDB.SubmitChanges(); Items.Add(item); } public override void RemoveItem(int id) { var projects = from p in Items where p.Id == id select p; var item = projects.FirstOrDefault(); if (item != null) { var tasks = (from t in App.TasksViewModel.Items where t.ProjectId == item.Id select t).ToList(); foreach (var task in tasks) App.TasksViewModel.RemoveItem(task.Id); Items.Remove(item); _taskDB.Projects.DeleteOnSubmit(item); _taskDB.SubmitChanges(); } } } } I have added the ViewModel C# code above.
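    A minimal sketch of one possible approach (not from the question): since the ListBox binds to Items, the sort button handler in the page's code-behind can reorder the ObservableCollection itself. The handler name and the assumption that the page's DataContext is the ProjectsViewModel are hypothetical.

        // Hypothetical app bar button handler; assumes DataContext is ProjectsViewModel
        // and that "using System.Linq;" is present.
        private void SortButton_Click(object sender, EventArgs e)
        {
            var vm = (ProjectsViewModel)DataContext;
            var sorted = vm.Items.OrderBy(p => p.Name).ToList();

            vm.Items.Clear();
            foreach (var project in sorted)
            {
                vm.Items.Add(project);
            }
        }

    Rebuilding the collection in place keeps the existing binding alive, so the ListBox refreshes without reassigning ItemsSource.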

    Read the article

  • How to have game audio loop at a certain point

    - by Essential
    I have a storm in my game, so I've made an ambient audio file which slowly grows into a storm as rain fades in, which then becomes a loopable storm audio file. Here is how I've done it: // Play intro clip and merge into main loop var introTime = stormIntro.length; AudioSource.PlayClipAtPoint( stormIntro, Vector3.zero, 0.7 ); Invoke( "StormMusic", introTime ); The way I'm currently trying to do it is to get the length of the storm_intro audio clip, play the clip, and then invoke storm_loop to begin once the length of the intro has elapsed. This sort of works, but not reliably, because there's occasionally a gap between the two. So how can I do it so the transition is seamless?
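    A minimal sketch of one way to close the gap (Unity C#; the component and field names are hypothetical): schedule both clips on the audio DSP clock with AudioSource.PlayScheduled instead of relying on Invoke, so the loop starts exactly when the intro ends.

        using UnityEngine;

        public class StormAudio : MonoBehaviour
        {
            public AudioSource introSource; // plays the intro clip, loop = false
            public AudioSource loopSource;  // plays the storm loop clip, loop = true

            void Start()
            {
                // Schedule on the DSP clock; a small lead time avoids scheduling in the past.
                double startTime = AudioSettings.dspTime + 0.1;
                introSource.PlayScheduled(startTime);
                loopSource.PlayScheduled(startTime + introSource.clip.length);
            }
        }

    Because both start times come from the same DSP clock, the hand-off does not depend on frame timing the way Invoke does.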

    Read the article

  • Understanding the workflow of the messages in a generic server implementation in Erlang

    - by Chiron
    The following code is from "Programming Erlang, 2nd Edition". It is an example of how to implement a generic server in Erlang. -module(server1). -export([start/2, rpc/2]). start(Name, Mod) -> register(Name, spawn(fun() -> loop(Name, Mod, Mod:init()) end)). rpc(Name, Request) -> Name ! {self(), Request}, receive {Name, Response} -> Response end. loop(Name, Mod, State) -> receive {From, Request} -> {Response, State1} = Mod:handle(Request, State), From ! {Name, Response}, loop(Name, Mod, State1) end. -module(name_server). -export([init/0, add/2, find/1, handle/2]). -import(server1, [rpc/2]). %% client routines add(Name, Place) -> rpc(name_server, {add, Name, Place}). find(Name) -> rpc(name_server, {find, Name}). %% callback routines init() -> dict:new(). handle({add, Name, Place}, Dict) -> {ok, dict:store(Name, Place, Dict)}; handle({find, Name}, Dict) -> {dict:find(Name, Dict), Dict}. server1:start(name_server, name_server). name_server:add(joe, "at home"). name_server:find(joe). I have tried hard to understand the workflow of the messages. Would you please help me understand the workflow of this server implementation during the execution of the functions server1:start, name_server:add, and name_server:find?

    Read the article

  • Deactivate 'pin to start' on the application list page when pinning an app via code using C#?

    - by Ahmed Ali
    I'm creating a Windows Phone app where I've put a button to pin the app to the start screen, but when I press and hold the app icon on the application list screen I find that the 'pin to start' option can still be used. ShellTile TileToFind = ShellTile.ActiveTiles.FirstOrDefault(x => x.NavigationUri.ToString().Contains("MainPage.xaml")); // Create the Tile if we didn't find that it already exists. if (TileToFind == null) { // Create the Tile object and set some initial properties for the Tile. // The Count value of 12 shows the number 12 on the front of the Tile. Valid values are 1-99. // A Count value of 0 indicates that the Count should not be displayed. StandardTileData NewTileData = new StandardTileData { BackgroundImage = new Uri("300.png", UriKind.Relative), Title = "apptitle", BackTitle = "title", BackContent = "testing ", BackBackgroundImage = null }; // Create the Tile and pin it to Start. This will cause a navigation to Start and a deactivation of our app. ShellTile.Create(new Uri("/MainPage.xaml", UriKind.Relative), NewTileData); } else { MessageBox.Show("Already Pinned"); } How can I prevent the user from pinning the application again from the application list screen?
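    A minimal sketch of a partial workaround (not from the question): an app cannot remove the system's own 'pin to start' item on the application list, but it can disable its in-app pin button once the tile exists, so the app itself never offers a duplicate pin. The button index and handler below are hypothetical.

        // Assumes: using System.Linq; using System.Windows.Navigation; using Microsoft.Phone.Shell;
        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            base.OnNavigatedTo(e);

            ShellTile existing = ShellTile.ActiveTiles
                .FirstOrDefault(x => x.NavigationUri.ToString().Contains("MainPage.xaml"));

            // Hypothetical: the pin button is the first app bar button on this page.
            var pinButton = (ApplicationBarIconButton)ApplicationBar.Buttons[0];
            pinButton.IsEnabled = (existing == null);
        }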

    Read the article

  • Not able to draw an image on the canvas of a SurfaceView in Android

    - by Fayaz Ali
    I am drawing an image using the drawBitmap method on the canvas of a SurfaceView, which is an overlay surface on my camera preview. The image drawn is a portion of the captured image, to guide the user to capture the next image with a proper overlap. When I launch the activity as the application's start activity, i.e. it is my first activity, it works fine and draws the image. But when I launch the same activity from some other activity, the SurfaceView does not show anything. Is there any difference between launching an activity from another activity and from the application launch? Can anyone help here, please?

    Read the article

  • Set the JAXB context factory initialization class to be used

    - by user1902288
    I have updated our projects (Java EE based, running on WebSphere 8.5) to use a new release of a company-internal framework (and EJB 3.x deployment descriptors rather than the 2.x ones). Since then my integration tests fail with the following exception: [java.lang.ClassNotFoundException: com.ibm.xml.xlxp2.jaxb.JAXBContextFactory] I can build the application with the previous framework release and everything works fine. While debugging I noticed that within the ContextFinder (javax.xml.bind) there are two different behaviours. Previous version (everything works just fine): none of the different places brings up a factory class, so the default factory class gets loaded, which is com.sun.xml.internal.bind.v2.ContextFactory (defined as a String constant within the class). Upgraded version (ClassNotFound): there is a resource "META-INF/services/javax.xml.bind.JAXBContext" being loaded successfully, and the first line read makes the ContextFinder attempt to load "com.ibm.xml.xlxp2.jaxb.JAXBContextFactory", which causes the error. I now have two questions: What sort of resource is that? Because inside our EAR there are two WARs, and neither of them contains a services folder in its META-INF directory. Where else could that value be coming from? Because a file diff showed me no new or changed properties files. Needless to say, I am going to read all about the JAXB configuration possibilities, but if you have any first insights on what could have gone wrong, or can help me out with that resource (is it a real file I have to look for?), I'd appreciate it a lot. Many thanks! EDIT (according to comments/questions): Out of curiosity, does your framework include JAXB JARs? Did the old version of your framework include jaxb.properties? Indeed (I am a bit surprised) the framework has a customized eclipselink-2.4.1-.jar inside the EAR that includes both a JAXB implementation and a jaxb.properties file that shows the following entry in both versions (the one that finds the factory as well as the one that throws the exception): javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory I think this has nothing to do with the current issue, since the jar stayed exactly the same in both EARs (the one that runs and the one with the exception). It's also not clear to me why the old version of the framework was ever selecting the com.sun implementation. There is a class javax.xml.bind.ContextFinder which is responsible for initializing the JAXBContextFactory. This class searches various places for the existence of a jaxb.properties file or a "javax.xml.bind.JAXBContext" resource. If none of those places shows which context factory to use, a default factory is loaded which is hardcoded in the class itself: private static final String PLATFORM_DEFAULT_FACTORY_CLASS = "com.sun.xml.internal.bind.v2.ContextFactory"; Now back to my problem: building with the previous version of the framework (and EJB 2.x deployment descriptors), everything works fine. While debugging I can see that no configuration is found and therefore the above-mentioned default factory is loaded. Building with the new version of the framework (and EJB 3.x deployment descriptors so I can deploy), ONLY A TEST CASE fails but the rest of the functionality works (e.g. I can send requests to our web service and they don't trigger any errors). While debugging I can see that a configuration is found. This resource is named "META-INF/services/javax.xml.bind.JAXBContext".
    Here are the most important lines of how this resource leads to the attempt to load 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory', which then throws the ClassNotFoundException. This is simplified source of the mentioned javax.xml.bind.ContextFinder class: URL resourceURL = ClassLoader.getSystemResource("META-INF/services/javax.xml.bind.JAXBContext"); BufferedReader r = new BufferedReader(new InputStreamReader(resourceURL.openStream(), "UTF-8")); String factoryClassName = r.readLine().trim(); The field factoryClassName now has the value 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory'. (The day I understand how to format source code on Stack Overflow will be my biggest step ahead... sorry for the formatting; after 20 minutes it still looks the same.) Because this has become a super large question I will also add a bounty :)

    Read the article

  • WPF, notify a child in the element tree about an event in a parent

    - by jester
    I am developing a WPF app and I want an event in a parent to be notified to several of its children in the element tree, so that each of them can take an action accordingly. I know that a custom RoutedEvent can be used to signal in the other direction, from a child to one of its ancestors, by bubbling the event upwards so that any of the ancestor elements can handle it. What I want is for the children to be notified about an event in the parent so they can handle it appropriately. What is the best strategy to achieve this? EDIT: Clarifying the comments: Say I have a parent UserControl. It has a TabControl and its contents are several nested child UserControls. Now consider a scenario where I want the TabControl.SelectionChanged() event to cause some changes in each of the child UserControls. How do I achieve this? (The content of each tab is a UserControl which may itself contain a few more levels of child UserControls. I want the UserControls at the bottom level to know about the SelectionChanged() event and respond accordingly.)
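    A minimal sketch of one approach (not from the question; the interface name is hypothetical): handle SelectionChanged in the parent, walk the visual tree below it, and notify every descendant that opts in through a small interface.

        // Assumes: using System.Windows; using System.Windows.Controls; using System.Windows.Media;
        public interface ISelectionAware
        {
            void OnParentSelectionChanged();
        }

        private void TabControl_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            NotifyChildren(this); // "this" is the parent UserControl
        }

        private static void NotifyChildren(DependencyObject parent)
        {
            for (int i = 0; i < VisualTreeHelper.GetChildrenCount(parent); i++)
            {
                DependencyObject child = VisualTreeHelper.GetChild(parent, i);
                var aware = child as ISelectionAware;
                if (aware != null)
                {
                    aware.OnParentSelectionChanged();
                }
                NotifyChildren(child); // recurse to reach bottom-level controls
            }
        }

    An event aggregator or messenger (as found in MVVM toolkits) is a common alternative when the parent and children should not know about each other directly.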

    Read the article

  • Merging XML records into one

    - by BhanuPratap Tarigopula
    I am new to XSLT. I have a requirement of merging and adding. XML: <OrderDetails> <OrderDetail action="add"> <OrderedUnits>18</OrderedUnits> <Date>2013-09-30T00:00:00</Date> <LocationCode>3202</LocationCode> <PONumber>022548295755</PONumber> </OrderDetail> <OrderDetail action="add"> <OrderedUnits>12</OrderedUnits> <Date>2013-09-30T00:00:00</Date> <LocationCode>3202</LocationCode> <PONumber>022548295755</PONumber> </OrderDetail> <IOrderDetail action="add"> <OrderedUnits>18</OrderedUnits> <Date>2013-09-30T00:00:00</Date> <LocationCode>3202</LocationCode> <PONumber>022548295762</PONumber> </OrderDetail> </OrderDetails> If the LocationCode, Date, and PONumber fields match, I need to add up the OrderedUnits and make it only one entry. Expected output XML: <OrderDetails> <OrderDetail action="add"> <OrderedUnits>30</OrderedUnits> <Date>2013-09-30T00:00:00</Date> <LocationCode>3202</LocationCode> <PONumber>022548295755</PONumber> </OrderDetail> <IOrderDetail action="add"> <OrderedUnits>18</OrderedUnits> <Date>2013-09-30T00:00:00</Date> <LocationCode>3202</LocationCode> <PONumber>022548295762</PONumber> </OrderDetail> </OrderDetails> How can I write this XSLT?

    Read the article

  • Custom model validation of dependent properties using Data Annotations

    - by Darin Dimitrov
    Up to now I've used the excellent FluentValidation library to validate my model classes. In web applications I use it in conjunction with the jquery.validate plugin to perform client-side validation as well. One drawback is that much of the validation logic is repeated on the client side and is no longer centralized in a single place. For this reason I'm looking for an alternative. There are many examples out there showing the usage of data annotations to perform model validation. It looks very promising. One thing I couldn't find out is how to validate a property that depends on another property's value. Let's take for example the following model: public class Event { [Required] public DateTime? StartDate { get; set; } [Required] public DateTime? EndDate { get; set; } } I would like to ensure that EndDate is greater than StartDate. I could write a custom validation attribute extending ValidationAttribute in order to perform custom validation logic. Unfortunately I couldn't find a way to obtain the model instance: public class CustomValidationAttribute : ValidationAttribute { public override bool IsValid(object value) { // value represents the property value on which this attribute is applied // but how to obtain the object instance to which this property belongs? return true; } } I found that the CustomValidationAttribute seems to do the job because it has this ValidationContext property that contains the object instance being validated. Unfortunately this attribute has been added only in .NET 4.0. So my question is: can I achieve the same functionality in .NET 3.5 SP1? UPDATE: It seems that FluentValidation already supports client-side validation and metadata in ASP.NET MVC 2. Still, it would be good to know whether data annotations could be used to validate dependent properties.
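    A minimal sketch of one workaround that works on .NET 3.5 SP1 (not from the question; the attribute name is hypothetical): apply the validation attribute at class level instead of property level, so IsValid receives the whole model instance and both dates are available for comparison.

        // Assumes: using System; using System.ComponentModel.DataAnnotations;
        [AttributeUsage(AttributeTargets.Class)]
        public class DateRangeAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                var ev = value as Event;
                if (ev == null || ev.StartDate == null || ev.EndDate == null)
                {
                    return true; // let [Required] report missing values
                }
                return ev.EndDate.Value > ev.StartDate.Value;
            }
        }

        // Usage (hypothetical):
        // [DateRange(ErrorMessage = "EndDate must be later than StartDate")]
        // public class Event { ... }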

    Read the article

  • Don't use the MySQL .NET connector, here is why

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/donrsquot-use-mysql-.net-connector-here-is-why.aspx If you use the .NET MySQL connector and your projects, new or old, use different versions of the MySQL .NET connector, then you need to upgrade them to the latest version (if you don't use Copy Local = true for the bin assembly). And this is not the only problem that happened to me. In my case I use .NET connector 6.7.4.0; let's see what happened to me after I started using it. The 6.7.4.0 installer registers the MySQL module in machine.config, and that breaks every application you haven't deployed with MySQL. Suppose, for example, I just create a website (in WebMatrix 3), put in my index.cshtml, and see what it previews for me. This means I need to add the MySql.Web reference even though I don't use any kind of database. I need to do this for every ASP.NET MVC project, no matter whether it uses MySQL. It's problematic when some of my projects use an older .NET MySQL connector. If you have trouble like this, simply use NuGet and say bye-bye to this trouble.

    Read the article

  • How to downgrade Razor 3 and fix the issue that CSHTML does not work in VS 2010/2012

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/how-to-downgrade-razor-3-and-fix-the-issue-that.aspx A few days ago I migrated a project to MVC 4 and suddenly saw that the MVC project's .cshtml files no longer worked. The problem happens because my project is now based on Razor 3 RC and VS 2012 doesn't even support it yet. (Remember that the VS team will ship support in VS 2012 Update 4.) My migration updated it to Razor 3 (which is not related to MVC 4; MVC 4 uses the older Razor 2). So how to fix the problem? Since VS 2012 Update 4 is still in development and MVC 3 support exists in both older versions of VS (2010 and 2012), it is better to migrate our Razor back to the old version so we can use our project in VS 2010 or 2012. If your project has Razor 3 and it seems that syntax highlighting doesn't work for you, then I suggest you try this NuGet package: https://www.nuget.org/packages/UpgradeMvc3ToMvc4 Remember that this will not succeed straight away. What you need to do is delete the packages folder in your project, then open packages.config and remove all package entries. Now run this command: PM> Install-Package UpgradeMvc3ToMvc4 If this fails, see what is causing the error in the console; simply remove the offending reference and try again. Now run it and you will see that it works. After running this you will see that the WebGrease DLL has a version number issue. Simply update it to version 1.5.2, and your project is now ready to run on .NET 4. If you do a bin deployment, then you don't need to have MVC 4 installed on the server either. Remember that MVC 5 is based on .NET 4.5, which simply means you can't run it in VS 2010. Until VS 2012 Update 4, MVC 5 .cshtml pages will work as simple HTML pages (for syntax highlighting and IntelliSense). Thanks for reading my post.

    Read the article

  • Register the "OneCode & OneScript" session at MVP Global Summit November 2013

    - by Jialiang
    Originally posted on: http://geekswithblogs.net/Jialiang/archive/2013/11/04/register-the-quotonecode-amp-onescriptquot-session-at-mvp-global-summit.aspx The yearly Microsoft MVP Global Summit will lift its curtain on Nov 17th in Bellevue, WA. This year, we have prepared three new apps and many new samples in response to MVPs' feedback last year. If you are attending this year's Microsoft MVP Global Summit, you will have the privilege to kiss or bite their development team: Sample Browser Windows Phone app – with 6000+ MSDN code samples that will be at your fingertips anytime and anywhere. Script Explorer for PowerShell ISE – with 8000+ script samples that will be at your fingertips when you are writing scripts in PowerShell ISE. PowerShell check-in policy for TFS – automatically checks your PowerShell script code against best practices of PowerShell. Interested? Please open your Schedule Builder for the MVP Summit 2013 and register for the event called "OneCode & OneScript" on Nov 17th. We look forward to seeing you and learning from your feedback.

    Read the article

  • How to add a second domain to an EC2 instance with Elastic & Route 53

    - by memeLab
    I've got my domain site.com running on EC2, using an Elastic IP and Route 53. I want to park site.net so that it resolves to the same site. I've looked up "Migrating an Existing Domain to Route 53" in the docs, but can't find any mention of how to add a second domain! I figured I'd have to create an A record, but when I do so, the record is created as site.net.site.com, which is not quite what I'm after! I've also done searches for mixes of 'route 53', 'park domain', 'addon domain', and 'second domain', but no dice... My prior experience is with cPanel and Plesk, so I'm a bit lost! Any pointers would be appreciated! TIA

    Read the article

  • Installing an old version of MySQL

    - by Peter
    I'm trying to troubleshoot a database import problem and want to duplicate the environment onto another server. This will require installing an older version of MySQL, but the packages that are listed only show a recent version. I'm currently running Debian Wheezy 7.1, and what was installed was the packaged 5.5.31. What is the official way to install an older copy? I guess I could hunt around Google and hope to find some files of the same version to install from source, but this doesn't seem like a reliable method.

    Read the article

  • Can't seem to stop Postfix backscatter

    - by Ian
    I've just migrated to a Postfix system and can't seem to stop the backscatter messages to unknown addresses on the site. I have a file, validrcpt, that lists all the valid emails on the site - about eight of them. Yet when a message is sent to a non-existent address, instead of just dropping it, Postfix is replying with a "Recipient address rejected: User unknown in virtual mailbox table" email. Do I have something set wrong? I've read http://www.postfix.org/BACKSCATTER_README.html but unless I'm caffeine deficient, I don't see what's happening, and perhaps I'm just too used to my old qmail setup. Here's postconf -n: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix content_filter = smtp-amavis:[127.0.0.1]:10024 home_mailbox = Maildir/ inet_interfaces = all inet_protocols = ipv4 local_recipient_maps = hash:/etc/postfix/validrcpt mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/dovecot.conf -m "${EXTENSION}" mailbox_size_limit = 0 mydestination = localhost myhostname = localhost mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = /etc/mailname policy-spf_time_limit = 3600s readme_directory = no recipient_bcc_maps = hash:/etc/postfix/recipient_bcc recipient_delimiter = + relay_recipient_maps = hash:/etc/postfix/relay_recipients relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_destination,check_policy_service unix:private/policy-spf,reject_rbl_client zen.spamhaus.org,reject_rbl_client bl.spamcop.net,reject_rbl_client cbl.abuseat.org,check_policy_service inet:127.0.0.1:10023 smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_sender_restrictions = reject_unknown_sender_domain smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/dovecot/dovecot.pem smtpd_tls_key_file = /etc/dovecot/private/dovecot.pem smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom virtual_gid_maps = static:5000 virtual_mailbox_base = /home/vmail virtual_mailbox_domains = digitalhit.com virtual_mailbox_maps = hash:/etc/postfix/vmaps virtual_minimum_uid = 1000 virtual_uid_maps = static:5000

    Read the article

  • Executing a command as apache

    - by Lord Loh.
    This script keeps outputting a 1, and I cannot understand why. <?php passthru("nohup sudo rndc reload sd.example.com",$op); print_r($op); ?> I have also tried the above code without the nohup. I have the following line in my sudoers file: apache ALL = NOPASSWD: /usr/sbin/rndc reload sd.example.com Just to test, I temporarily allowed apache a shell, logged in as apache with sudo su apache, and successfully managed to execute sudo rndc reload sd.example.com. I do not see any error message in my log files either. What could I possibly be doing wrong? None of the similar threads have pointed me to anything that solved my problem or helped me debug it.

    Read the article

  • Reinserted a RAID disk. Defined as foreign. Is import or clear the correct choice?

    - by Petrus
    I have re-inserted a RAID disk on a Dell server with Windows Server 2008. The drive-status indicator was alternating between a green and an amber light, and the monitor gave the following message: There are offline or missing virtual drives with preserved cache. Please check the cables and ensure that all drives are present. Press any key to enter the configuration utility. I pressed a key and the PERC 6/I Integrated BIOS Configuration Utility showed that the RAID status for that disk was Offline. After reinsertion of the disk the monitor is giving the following message: Foreign configuration(s) found on adapter. Press any key to continue or 'C' to load the configuration utility, or press 'F' to import foreign configuration(s) and continue. After checking around on the net I am uncertain whether I should choose import or clear. I cannot find out whether an import means importing information from the array/system to the now-foreign disk, or the other way around, i.e. importing information from the foreign disk to the array/system that was actually working fine. Also: is clear a necessary step ahead of a rebuild of that disk, or does clear mean clearing the system to somehow make it ready to import the information from the foreign disk to the array/system, which is not what I want? I imagine that making the wrong choice here might be fatal. Please help clear this up by telling me what to choose and why.

    Read the article

  • Which IP addresses are using Remote Desktop?

    - by Andomar
    We have a server that has a Remote Desktop port open to the internet (no VPN). Several people are allowed to log on to the machine remotely. The server runs Windows 7 (a desktop OS). I can find logon times using Event Viewer, but it does not show the IP address of the remote machine. (At any rate, manually browsing Event Viewer for all logon events would be time consuming, to say the least.) Is there a way to find out which IP addresses are using Remote Desktop?

    Read the article

  • Allowing ssh in iptables

    - by sat
    I am doing iptables firewall configuration. I need to allow SSH connections only from a particular IP, but it is blocking the SSH connection. I used the commands below. sat:~# iptables -F sat:~# iptables -A INPUT -p tcp -s src_ip_address -d my_ip_address --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT sat:~# iptables -A INPUT -j DROP sat:~# iptables -nL Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- src_ip_address my_ip_address tcp dpt:22 state NEW,ESTABLISHED DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination If I try to connect from src_ip_address to my_ip_address, it blocks the connection. It even blocks connections from my_ip_address to src_ip_address. I haven't put any rules in the OUTPUT chain. What is wrong with my commands? How do I allow SSH in iptables?

    Read the article

  • Management network on an additional network port for munin and monit

    - by paolo
    I want to build a separate network for server management. I have a Linux (Debian/Ubuntu) computer with several network cards. Both network cards are set up in /etc/network/interfaces: # The primary network interface #allow-hotplug eth0 #iface eth0 inet dhcp auto eth0 iface eth0 inet static address 10.0.0.240 netmast 255.255.255.0 network 10.0.0.0 brodacast 10.0.0.255 gateway 10.0.0.254 auto eth1 iface eth1 inet static address 10.0.10.240 netmast 255.255.255.0 network 10.0.10.0 brodacast 10.0.10.255 post-up ip route add 10.0.0.0/24 dev eth0 src 10.0.0.240 table eth0-WAN post-up ip route add default via 10.0.0.254 table eth0-WAN post-up ip route add 10.0.10.0/24 dev eth1 src 10.0.10.240 table eth1-LAN post-up ip route add default via 10.0.10.200 table eth1-LAN post-up ip rule add from 10.0.0.240 table eth0-WAN post-up ip rule add from 10.0.10.240 table eth1-LAN I also adjusted /etc/iproute2/rt_tables and set up the routes above in /etc/network/interfaces. I want applications such as munin and monit to listen only on eth1 and not on eth0. It works after a reboot, but sometimes not always. traceroute -i eth1 10.0.10.200 does not go through. What am I doing wrong?

    Read the article

  • How could load average numbers from 'htop' exceed 100% CPU utilization?

    - by Joe Huang
    I use 'htop' to monitor my web server. It has recently been quite loaded, and the load average is showing something like this: Load average: 3.10 2.56 1.63 I searched the web about these numbers and found an article about them: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages In the article, it says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And from these numbers, does it mean I should be wary about the load now? But the performance seems totally fine, and this is a managed VPS; the hosting company has not sent me any warning about it. During the daytime, the load average always shows these high numbers... here is another snapshot taken while writing: Load average: 3.03 2.77 1.97 Load average: 0.41 1.29 1.60 <---- 5 minutes later So I am wondering how much room is left for this site to grow with the current configuration? What kind of proactive actions should I take in advance? I don't want to wait until the server bursts. Thanks.

    Read the article

  • Configs for several sites in Apache with SSL

    - by elCapitano
    I need to secure two different sites in Apache. One of them should only be a proxy for a different server which is running on port 8069. Now one (which is natively included in Apache) runs with SSL: <VirtualHost *:443> ServerName 192.168.1.20 SSLEngine on SSLCertificateFile /etc/ssl/erp/oeserver.crt SSLCertificateKeyFile /etc/ssl/erp/oeserver.key DocumentRoot /var/www/cloud ServerPath /cloud/ #CustomLog /var/www/logs/ssl-access_log combined #ErrorLog /var/www/logs/ssl-error_log </VirtualHost> The other one is not running and is not even registered. When I try to access it, I get an error (ssl_error_rx_record_too_long): <VirtualHost *:443> ServerName 192.168.1.20 ServerPath /erp/ SSLEngine on SSLCertificateFile /etc/ssl/erp/oeserver.crt SSLCertificateKeyFile /etc/ssl/erp/oeserver.key ProxyRequests Off ProxyPreserveHost On <Proxy *> Order deny,allow Allow from all </Proxy> ProxyVia On ProxyPass / http://127.0.0.1:8069/ ProxyPassReverse / http://127.0.0.1:8069 RewriteEngine on RewriteRule ^/(.*) http://127.0.0.1:8069/$1 [P] RequestHeader set "X-Forwarded-Proto" "https" SetEnv proxy-nokeepalive 1 </VirtualHost> My wish is the following configuration: 192.168.1.20 ->> unsecured local path to website 192.168.1.20/cloud/ ->> secured local document path from cloud 192.168.1.20/erp/ ->> secured proxy on port 80 for http://192.168.1.20:8069 How is this possible? Is it even possible? Perhaps cloud.192.168.1.20 and erp.192.168.1.20 would be better?! Thank you

    Read the article

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue: I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider), but it seems that this needs to be done via the cloud-init script. However, even after I successfully installed the chef gem via cloud-init I was still getting the following error: The chef binary (either chef-solo or chef-client) was not found A quick Google of this error suggested three probable causes: Chef had failed to install It had installed, but the directory was not in the $PATH environment variable It had installed and is in the $PATH but with incorrect permissions I logged in and double checked; chef-solo and chef-client were installed; the path variable for the user, sudo and root all included /usr/local/bin; and permissions were all fine. I managed to solve this problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to ssh into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?

    Read the article

  • Cisco ASA site-to-site VPN not initiating phase 1 (not sending UDP 500 packets)

    - by Sean Steadman
    I am hoping someone here can help me with my problem. I am trying to set up an IPsec site-to-site VPN between two Cisco ASA 5520s in GNS3 (both running 8.4.2). I have been unsuccessful in getting the tunnel up, and it appears neither ASA is sending packets out for phase 1 or phase 2 (tested by using Wireshark and seeing NO UDP 500 packets). Doing show ipsec sa and such shows nothing. CALIFORNIA(config)# show ipsec sa There are no ipsec sas FLA-ASA# show ipsec sa There are no ipsec sas I will attach both configurations as two different Pastebin files so as to keep this post a bit cleaner. Essentially the California side has 172.20.1.0/24 and the Florida side has 10.10.10.0/24. California ASA config: http://pastebin.com/v0pngYzF Florida ASA config: http://pastebin.com/E2geybta Please let me know if there is any other vital information that could help. I have gotten IPsec tunnels to work using Openswan (Linux) and Cisco routers, but cannot for the life of me get ASA IPsec tunnels to work. The ASDM is out of the question; I only use the CLI. Thanks for any useful help!

    Read the article

  • How to manage credentials/access to multiple SSH servers

    - by geoaxis
    I would like to make a script which can maintain multiple servers via SSH. I want to control the authentication/authorization in such a manner that authentication is done by a gateway, and any other access is routed through this SSH server to internal services without any further authentication/authorization requirements. So, for example, if user A can log into server_1, he can then ssh to server_2 without any other authentication and do whatever he is allowed to do on server_2 (like shut down MySQL, upgrade it and restart it; this could be done via some remote shell script). The problem that I am trying to solve is to come up with a deployment script for a Java EE system which involves databases and Tomcat instances. They need to be shut down and re-spawned. The requirement is to have a deployment script with as little human interaction as possible for both developers and operations.

    Read the article
