Search Results

Search found 19871 results on 795 pages for 'commit log'.


  • Don't see job schedule in SQL Mgmt UI added by sp_add_jobschedule

    - by Ariel
    I'm running a script like the one below on a SQL Server box and, even though it finishes correctly, when I right-click that job in the SQL Mgmt UI, open its properties and go to Schedules, I cannot see the schedule that was just added... what am I missing? (I'm using the right job_name param, etc.) Thanks! BEGIN TRY BEGIN TRAN EXEC msdb.dbo.sp_add_jobschedule @job_name = 'Job name', @name=N'Job schedule name', @enabled = 0, @freq_type=1, @active_start_date=20100525, @active_start_time=60000 COMMIT TRAN END TRY BEGIN CATCH SELECT ERROR_Message(), ERROR_Line(); ROLLBACK TRAN END CATCH
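
    A hedged way to confirm, outside the Management Studio tree, that the schedule really was created (Object Explorer often just needs a refresh of the Jobs node; the job name below is the placeholder from the script above):

        EXEC msdb.dbo.sp_help_jobschedule @job_name = N'Job name';

        -- or query the msdb catalog views directly
        SELECT j.name AS job_name, s.name AS schedule_name, s.enabled
        FROM msdb.dbo.sysjobs j
        JOIN msdb.dbo.sysjobschedules js ON js.job_id = j.job_id
        JOIN msdb.dbo.sysschedules s ON s.schedule_id = js.schedule_id
        WHERE j.name = N'Job name';

    If these show the schedule, the data is there and only the UI view is stale.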

    Read the article

  • mysql master-slave setup with synchronous replication

    - by imaginative
    I have a very trivial MySQL master-slave setup going on between two servers. The problem is that replication is asynchronous, and this can cause issues (even on a low-latency link) if the master server were to crash after a COMMIT but before the slave's replication thread was able to fetch the last binlog event. Is there any way to force MySQL to do synchronous commits so that data consistency is guaranteed in a master-slave relationship?
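
    Stock MySQL has no fully synchronous replication, but semi-synchronous replication (a plugin shipped from MySQL 5.5) makes the master wait for at least one slave to acknowledge receipt of the transaction before the COMMIT returns, which closes exactly this crash window. A hedged sketch of enabling it (plugin file names assume a Linux build):

        -- on the master
        INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
        SET GLOBAL rpl_semi_sync_master_enabled = 1;
        SET GLOBAL rpl_semi_sync_master_timeout = 1000;   -- ms to wait before falling back to async

        -- on the slave
        INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
        SET GLOBAL rpl_semi_sync_slave_enabled = 1;
        STOP SLAVE IO_THREAD;
        START SLAVE IO_THREAD;

    Note that semi-sync only guarantees the slave has received the event, not that it has applied it.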

    Read the article

  • CAS authentication and redirects with jQuery Ajax

    - by Steve Nay
    I've got an HTML page that needs to make requests to a CAS-protected (Central Authentication Service) web service using the jQuery AJAX functions. I've got the following code: $.ajax({ type: "GET", url: request, dataType: "json", complete: function(xmlHttp) { console.log(xmlHttp); alert(xmlHttp.status); }, success: handleRedirects }); The request variable can be either to the CAS server (https://cas.mydomain.com/login?service=myServiceURL) or directly to the service (which should then redirect back to CAS to get a service ticket). Firebug shows that the request is being made and that it comes back as a 302 redirect. However, the $.ajax() function isn't handling the redirect. I wrote this function to work around this: var handleRedirects = function(data, textStatus) { console.log(data, textStatus); if (data.redirect) { console.log("Calling a redirect: " + data.redirect); $.get(data.redirect, handleRedirects); } else { //function that handles the actual data processing gotResponse(data); } }; However, even with this, the handleRedirects function never gets called, and the xmlHttp.status always returns 0. It also doesn't look like the cookies are getting sent with the cas.mydomain.com call. (See this question for a similar problem.) Is this a problem with the AJAX calls not handling redirects, or is there more going on here than meets the eye?
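
    One detail that matters here: XMLHttpRequest follows same-origin redirects transparently, so handleRedirects never gets to see the 302 itself, and a redirect to a different origin such as cas.mydomain.com is blocked by the browser, which is the classic reason for xmlHttp.status coming back as 0 (and for the cookies appearing not to be sent). A hedged sketch of detecting that case and falling back to a full-page trip through CAS (the URLs are the ones from the question):

        $.ajax({
            type: "GET",
            url: request,
            dataType: "json",
            success: gotResponse,
            error: function (xhr) {
                // status 0 here usually means the cross-origin hop to CAS was dropped
                if (xhr.status === 0 || xhr.status === 401) {
                    window.location = "https://cas.mydomain.com/login?service=" +
                        encodeURIComponent(window.location.href);
                }
            }
        });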

    Read the article

  • Why are changes to coffeescript files not being compiled when my Rails 3.2.0 app is in development mode?

    - by ben
    Normally, any changes I make to .js.coffee files in my Rails 3.2.0 app in development mode take effect when I refresh the page. All of a sudden, this is not happening. If I do rake assets:precompile, then the changes are shown, but then if I do rake assets:clean they go back to not being shown. What is causing this? Edit: Restarting the server makes the changes show. Why isn't this happening automatically as before? Edit: Here is my development.rb Myapp::Application.configure do # Settings specified here will take precedence over those in config/application.rb # In the development environment your application's code is reloaded on # every request. This slows down response time but is perfect for development # since you don't have to restart the web server when you make code changes. config.cache_classes = false # Log error messages when you accidentally call methods on nil. config.whiny_nils = true # Show full error reports and disable caching config.consider_all_requests_local = true config.action_controller.perform_caching = false # Don't care if the mailer can't send config.action_mailer.raise_delivery_errors = false # Print deprecation notices to the Rails logger config.active_support.deprecation = :log # Only use best-standards-support built into browsers config.action_dispatch.best_standards_support = :builtin # Raise exception on mass assignment protection for Active Record models config.active_record.mass_assignment_sanitizer = :strict # Log the query plan for queries taking more than this (works # with SQLite, MySQL, and PostgreSQL) config.active_record.auto_explain_threshold_in_seconds = 0.5 # Do not compress assets config.assets.compress = false # Expands the lines which load the assets config.assets.debug = true config.action_mailer.default_url_options = { :host => 'localhost:3000' } config.log_level = :warn end
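
    A common cause (offered as a guess rather than a confirmed diagnosis) is a stale Sprockets cache, or leftover precompiled files shadowing live compilation; file modification times that lag behind the cache (for example on a VM shared folder) produce the same symptom. A hedged cleanup to try with the server stopped:

        bundle exec rake assets:clean                        # drop anything left in public/assets
        rm -rf tmp/cache/assets                              # clear Sprockets' development cache
        find app/assets -name '*.coffee' -exec touch {} +    # make mtimes newer than any remaining cache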

    Read the article

  • Meaning of iptables filter restriction

    - by Gnanam
    Hi, my server is Red Hat Enterprise Linux Server release 5. I'm not an expert in the Linux iptables firewall. After installation, I find the following entries in /etc/sysconfig/iptables. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -j ACCEPT -A FORWARD -j ACCEPT -A OUTPUT -j ACCEPT COMMIT What do these iptables filter rules mean?
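
    In short, this default rule set restricts nothing: the built-in chains already have an ACCEPT policy and the three appended rules accept every packet anyway. The same file, annotated for reading (iptables-restore itself only accepts whole-line comments):

        *filter
        # default policies for the three built-in chains: accept everything
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        # the appended rules also accept every packet, so no traffic is filtered at all
        -A INPUT -j ACCEPT
        -A FORWARD -j ACCEPT
        -A OUTPUT -j ACCEPT
        COMMIT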

    Read the article

  • Replacing unversioned files in a WiX major upgrade

    - by Joshua
    I am still having this problem. This is the closest I have come to a solution that works, and yet it doesn't quite work. Here is (most of) the code: <Product Id='$(var.ProductCode)' UpgradeCode='$(var.UpgradeCode)' Name="Pathways" Version='$(var.ProductVersion)' Manufacturer='$(var.Manufacturer)' Language='1033'> Maximum="$(var.ProductVersion)" IncludeMaximum="no" Language="1033" Property="OLDAPPFOUND" / -- -- -- There is a later version of this program installed. The problem I am having is that I need the two files in the Database component to replace the previous copies. Since these files are unversioned, I have attempted to use the CompanionFile tag set to the PathwaysExe since that is the main executable of the application, and it IS being updated, even if the log says it isn't! The strangest thing about this is that the PathwaysLdf file IS BEING UPDATED CORRECTLY, and the PathwaysMdf file IS NOT. The log seems to indicate that the "Existing file is of an equal version (Checked using version of companion)". This is very strange because that file is being replaced just fine. The only idea I have left is that this problem has to do with the install sequence, and I'm not sure how to proceed! I have the InstallExecuteSequence set like I do because of the SettingsXml file, and my need to NOT overwrite that file, which is actually working now, so whatever solution works for the database files can't break the working settings file! ;) The full log is located at: http://pastebin.com/HFiGKuKN PLEASE AND THANK YOU!
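
    One technique often suggested for unversioned files that refuse to be replaced during a major upgrade (offered as a hedged sketch, not a fix verified against this installer or its log) is to schedule RemoveExistingProducts early, so the old .mdf/.ldf copies are removed before file costing compares them against the new ones:

        <InstallExecuteSequence>
          <!-- remove the old product right after InstallInitialize, before the new files are costed,
               so unversioned files are not kept as "equal"; test this against the SettingsXml handling -->
          <RemoveExistingProducts After="InstallInitialize" />
        </InstallExecuteSequence>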

    Read the article

  • JAXB does not call setter when unmarshalling objects

    - by Yaneeve
    Hi all, I am using JAXB 2.0 JDK 6 in order to unmarshall an XML instance into POJOs. In order to add some custom validation I have inserted a validation call into the setter of a property, yet despite it being private, it seems that the unmarshaller does not call the setter but directly modifies the private field. It is crucial to me that the custom validation occurs for this specific field every unmarshall call. What should I do? Code: @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "LegalParams", propOrder = { "value" }) public class LegalParams { private static final Logger LOG = Logger.getLogger(LegalParams.class); @XmlTransient private LegalParamsValidator legalParamValidator; public LegalParams() { try { WebApplicationContext webApplicationContext = ContextLoader.getCurrentWebApplicationContext(); LegalParamsFactory legalParamsFactory = (LegalParamsFactory) webApplicationContext.getBean("legalParamsFactory"); HttpSession httpSession = SessionHolder.getInstance().get(); legalParamValidator = legalParamsFactory.newLegalParamsValidator(httpSession); } catch (LegalParamsException lpe) { LOG.warn("Validator related error occurred while attempting to construct a new instance of LegalParams"); throw new IllegalStateException("LegalParams creation failure", lpe); } catch (Exception e) { LOG.warn("Spring related error occurred while attempting to construct a new instance of LegalParams"); throw new IllegalStateException("LegalParams creation failure", e); } } @XmlValue private String value; /** * Gets the value of the value property. * * @return * possible object is * {@link String } * */ public String getValue() { return value; } /** * Sets the value of the value property. * * @param value * allowed object is * {@link String } * @throws TestCaseValidationException * */ public void setValue(String value) throws TestCaseValidationException { legalParamValidator.assertValid(value); this.value = value; } }
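
    With XmlAccessType.FIELD the unmarshaller is allowed to write straight into the field, so the setter being bypassed is expected behaviour rather than a bug. Two hedged alternatives: switch the class to XmlAccessType.PROPERTY (and move @XmlValue onto the getter), or keep field access and validate in an afterUnmarshal callback, which JAXB invokes by convention once the object has been populated:

        // added to LegalParams; requires import javax.xml.bind.Unmarshaller
        void afterUnmarshal(Unmarshaller unmarshaller, Object parent) {
            try {
                legalParamValidator.assertValid(value);
            } catch (TestCaseValidationException e) {
                // wrapped because the callback signature declares no checked exceptions
                throw new IllegalStateException("LegalParams validation failed", e);
            }
        }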

    Read the article

  • jQuery scrollTo plugin to scroll screen to div wouldn't work, why?

    - by Michael Mao
    Hi all: I am working on a web application, whose mock-up page can now be found at: our server If you click on the blue title "step1" and then choose the option of "delivery to address", a form will show up using jQuery ajax load. No problem for this. Click on the "venue" radio button will take you to another form, no problem for this as well. If you scroll down a bit, you can see a textarea, top of that you can see a link called "what's this?". Click on it and the textarea shall be filled with sample words. The problem is, after clicking on the link, the webpage automatically scrolls to the top section. What I want is to keep the textarea to the center of screen after link is clicked. I am trying to use a jQuery plugin called "scrollTo", which can be found here From its demo page I can tell is what I want. And here is my code to try using it: function reloadDeliveryForm() { $('#deliveryForm').load('./ajax/deliveryToVenueForm.html', function(response, status, xhr) { if (status == "error") { $.prompt("Sorry but there was an error, please try again.<br />" + "If same error persists, please report to webmaster."); } else //deliveryForm loaded successfully { validateDeliveryForm(); $("#delivery_special").elastic(); $('#special_conditions_tip').click(function() { console.log("start filling"); fillTextareaWithExample(); console.log("end filling"); $.scrollTo('#delivery_special'); console.log("end scrolling"); }); } }); } From Firebug output I can tell the scrollTo function is called, but doesn't work. I've switched jQuery back to version 1.3.2, which is used on the demo page of the plugin, but that wouldn't help, either. Is there a problem with my coding? Which technique would you use to resolve this problem? Any suggestion is much appreciated.
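
    The jump to the top is most likely the default action of the clicked link itself (an href="#" navigates to the top of the page) rather than scrollTo failing. A hedged variant of the click handler that suppresses the default navigation before scrolling:

        $('#special_conditions_tip').click(function (e) {
            e.preventDefault();                      // stop the '#' link from jumping to the top
            fillTextareaWithExample();
            $.scrollTo('#delivery_special', 800);    // 800 ms animation; a plain $.scrollTo('#delivery_special') works too
            return false;
        });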

    Read the article

  • LXC container can only access host via bridge

    - by vitaut
    I have an LXC container with i686 Ubuntu 12.04 running on a x86_64 Ubuntu 12.04 host. I've set up a bridge using instructions here. However the ping from the container only goes through to the host and not to other machines on the local network. Similarly only the host and not the other machines see the container OS. The host's /etc/network/interfaces file looks as follows: auto lo iface lo inet loopback iface eth0 inet manual auto br0 iface br0 inet dhcp bridge_ports eth0 bridge_fd 0 bridge_maxwait 0 The container's /etc/network/interfaces file looks as follows: auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp And here's the relevant part of the container's config: lxc.network.type=veth lxc.network.link=br0 lxc.network.flags=up Any ideas what I'm doing wrong? Additional info: The output of iptables-save on host: $ sudo iptables-save # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013 *filter :INPUT ACCEPT [6854:721708] :FORWARD ACCEPT [4067:538895] :OUTPUT ACCEPT [4967:522405] COMMIT # Completed on Sat Oct 26 06:06:48 2013 # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013 *nat :PREROUTING ACCEPT [82235:21547307] :INPUT ACCEPT [16:1070] :OUTPUT ACCEPT [9386:583359] :POSTROUTING ACCEPT [14693:1291952] -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE COMMIT # Completed on Sat Oct 26 06:06:48 2013 The output of brctl show on host: $ brctl show bridge name bridge id STP enabled interfaces br0 8000.080027409684 no eth0 vethBkwWyV The output of ifconfig br0 on host: $ ifconfig br0 br0 Link encap:Ethernet HWaddr 08:00:27:40:96:84 inet addr:192.168.1.11 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:232863 errors:0 dropped:0 overruns:0 frame:0 TX packets:59518 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:34437354 (34.4 MB) TX bytes:198492871 (198.4 MB) The output of ifconfig eth0 on host: $ ifconfig eth0 eth0 Link encap:Ethernet HWaddr 08:00:27:40:96:84 inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:299419 errors:0 dropped:0 overruns:0 frame:0 TX packets:203569 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:59077446 (59.0 MB) TX bytes:372056540 (372.0 MB) The output of ifconfig eth0 on container: $ ifconfig eth0 eth0 Link encap:Ethernet HWaddr 00:16:3e:74:08:2b inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::216:3eff:fe74:82b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:81 errors:0 dropped:0 overruns:0 frame:0 TX packets:113 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:8506 (8.5 KB) TX bytes:9021 (9.0 KB)
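
    A hedged first diagnostic is to watch, from the host, whether the container's ARP and ICMP traffic actually leaves the physical interface and whether replies come back; if requests reach eth0 but nothing returns, the problem is outside the host. The 08:00:27 MAC prefix suggests the host may itself be a VirtualBox guest, in which case the VM's adapter would also need promiscuous mode allowed:

        # on the host, while pinging another LAN machine from the container
        sudo tcpdump -n -i br0 arp or icmp
        sudo tcpdump -n -i eth0 arp or icmp

        # only if the host really is a VirtualBox VM (a hedged assumption based on the MAC prefix)
        VBoxManage modifyvm "host-vm-name" --nicpromisc1 allow-all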

    Read the article

  • Jquery Json dynamic variable name generation

    - by PlanetUnknown
    I make a jquery .ajax call and I'm expecting a json result. The catch is, if there are say 5 authors, I'll get author_details_0, author_details_1, author_details_2, etc.... How can I dynamically construct the name of the variable to retrieve from json ? I don't know how many authors I'll get, there could be hundreds. $.ajax({ type: "POST", url: "/authordetails/show_my_details/", data: af_pTempString, dataType: "json", beforeSend: function() { }, success: function(jsonData) { console.log("Incoming from backend : " + jsonData.toSource()); if(jsonData.AuthorCount) { console.log("Number of Authors : " + jsonData.AuthorCount); for (i = 0; i < jsonData.AuthorCount; i++) { temp = 'author_details_' + i; <-------------------This is the name of the variable I'm expecting. console.log("Farm information : " + eval(jsonData.temp) ); <----- This doesn't work, how can I get jsonData.author_details_2 for example, 'coz I don't know how many authors are there, there could be hundreds. } } Please let me know if you have any idea how to solve this ! Much appreciated.
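
    Property names built at runtime can be looked up with bracket notation, which avoids eval entirely; a minimal sketch based on the loop above:

        for (var i = 0; i < jsonData.AuthorCount; i++) {
            var details = jsonData['author_details_' + i];   // same object as jsonData.author_details_0, _1, ...
            console.log("Author information : ", details);
        }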

    Read the article

  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions for one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well-known solution is to use a priority queue and pop / push the first element from every list, and I can understand why it takes O(N log K) time. But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists and takes O(T1 + T2) time, where T1 and T2 stand for the LIST1 and LIST2 sizes. What we do now generally means pairing these lists and merging them pair-by-pair (if the number is odd, the last list, for example, could be ignored in the first steps). This generally means we have to make the following "tree" of merge operations: N1, N2, N3... stand for LIST1, LIST2, LIST3 sizes O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ... O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ... O(N1 + N2 + N3 + N4 + .... + NK) It looks obvious that there will be log(K) of these rows, each of them implementing O(N) operations, so the time for the MERGE(LIST1, LIST2, ... , LISTK) operation would actually equal O(N log K). My friend told me (two days ago) it would take O(K N) time. So, the question is - did I f%ck up somewhere or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority queue approach?
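
    For comparison, a quick back-of-the-envelope check (assuming, for simplicity, K lists of roughly N/K elements each): merging the lists one at a time into a growing accumulator is what costs O(KN), while the pairwise tree sketched above costs O(N log K):

        \text{one at a time:}\quad \frac{2N}{K} + \frac{3N}{K} + \dots + \frac{KN}{K} = \frac{N}{K}\Big(\frac{K(K+1)}{2} - 1\Big) = \Theta(KN)

        \text{pairwise tree:}\quad \underbrace{N + N + \dots + N}_{\lceil \log_2 K \rceil \text{ levels}} = \Theta(N \log K)

    So an O(KN) bound describes the naive one-at-a-time strategy, not the pairwise divide-and-conquer one.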

    Read the article

  • Check if files in a directory are still being written using Windows Batch Script

    - by FMFF
    Hello. Here's my batch file to parse a directory, and zip files of certain type REM Begin ------------------------ tasklist /FI "IMAGENAME eq 7za.exe" /FO CSV > search.log FOR /F %%A IN (search.log) DO IF %%~zA EQU 0 GOTO end for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do ( "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%%A.zip" "C:\temp\%%A" Move "C:\temp\%%A" "C:\Temp\Archive" ) :end del search.log REM pause exit REM End --------------------------- This code works just fine for 90% of my needs. It will be deployed as a scheduled task. However, the *.ps files are rather large (minimum of 1GB) in real time cases. So the code is supposed to check if the incoming file is completely written and is not locked by the application that is writing it. I saw another example elsewhere, that suggested the following approach :TestFile ren c:\file.txt c:\file.txt if errorlevel 0 goto docopy sleep 5 goto TestFile :docopy However this example is good for a fixed file. How can I use that many labels and GoTo's inside a for loop without causing an infinite loop? Or is this code safe to be used in the For Loop? Thank you for any help.
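
    One hedged way to keep the rename test without GOTO breaking out of the FOR body is to move the per-file work into a CALLed subroutine, so the retry loop lives entirely inside the subroutine (sleep is replaced with ping, which exists on stock Windows; an unbounded retry is assumed acceptable here):

        for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do call :ProcessFile "%%A"
        goto :eof

        :ProcessFile
        rem if the writer still holds the file open, the rename fails and we wait about 5 seconds
        ren "C:\Temp\%~1" "%~1" >nul 2>&1
        if errorlevel 1 (
            ping -n 6 127.0.0.1 >nul
            goto :ProcessFile
        )
        "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%~1.zip" "C:\temp\%~1"
        move "C:\temp\%~1" "C:\Temp\Archive"
        goto :eof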

    Read the article

  • How do I use a custom authentication mechanism for a Java web application with Spring Security?

    - by Adam
    Hi, I'm working on a project to convert an existing Java web application to use Spring Web MVC. As a part of this I will migrate the existing log-on/log-off mechanism to use Spring Security. The idea at this stage is to replicate the existing functionality and replace only the web layer, leaving the service classes and objects in place. The required functionality is simple. Access is controlled to URLs and to access certain pages the user must log on. Authentication is performed with a simple username and password along with an extra static piece of information that comes from the login page. There is no notion of a role: once a user has logged on they have access to all of the pages. Behind the scenes, the service layer has a class with a simple authentication method: doAuthenticate(String username, String password, String info) throws ServiceException An exception is thrown if the login fails. I'd like to leave this existing service object that does the authentication intact but to "plug it into" the Spring Security mechanism. Can somebody suggest the best approach to take for this please? Naturally, I'd like to take the path of least resistance and leave the work where possible to Spring... Thanks in advance, Adam.
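
    Spring Security's usual extension point for this is a custom AuthenticationProvider that delegates to the existing service. The sketch below is an assumption-laden outline rather than a drop-in implementation: LegacyAuthService is a hypothetical handle on the existing doAuthenticate method, and the way the extra static value reaches getDetails() would need a custom AuthenticationDetailsSource:

        import org.springframework.security.authentication.AuthenticationProvider;
        import org.springframework.security.authentication.BadCredentialsException;
        import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
        import org.springframework.security.core.Authentication;
        import org.springframework.security.core.AuthenticationException;
        import org.springframework.security.core.authority.AuthorityUtils;

        public class LegacyAuthenticationProvider implements AuthenticationProvider {

            private final LegacyAuthService legacyAuthService;   // hypothetical wrapper around doAuthenticate(...)

            public LegacyAuthenticationProvider(LegacyAuthService legacyAuthService) {
                this.legacyAuthService = legacyAuthService;
            }

            public Authentication authenticate(Authentication authentication) throws AuthenticationException {
                String username = authentication.getName();
                String password = String.valueOf(authentication.getCredentials());
                String info = String.valueOf(authentication.getDetails());   // assumption: details carry the static value
                try {
                    legacyAuthService.doAuthenticate(username, password, info);
                } catch (ServiceException e) {
                    throw new BadCredentialsException("Login failed", e);
                }
                // there is no real role model, so grant a single marker authority
                return new UsernamePasswordAuthenticationToken(username, password,
                        AuthorityUtils.createAuthorityList("ROLE_USER"));
            }

            public boolean supports(Class<?> authentication) {
                return UsernamePasswordAuthenticationToken.class.isAssignableFrom(authentication);
            }
        }

    The provider is then registered with the authentication manager in the Spring Security configuration.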

    Read the article

  • WCF Callbacks often break

    - by cdecker
    I'm having quite some trouble with the WCF Callback mechanism. They sometimes work but most of the time they don't. I have a really simple Interface for the callbacks to implement: public interface IClientCallback { [OperationContract] void Log(string content); } I then implement that interface with a class on the client: [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)] [ServiceContract] internal sealed class ClientCallback : IClientCallback { public void Log(String content){ Console.Write(content); } } And on the client I finally connect to the server: NetTcpBinding tcpbinding = new NetTcpBinding(SecurityMode.Transport); EndpointAddress endpoint = new EndpointAddress("net.tcp://127.0.0.1:1337"); ClientCallback callback= new ClientCallback(); DuplexChannelFactory<IServer> factory = new DuplexChannelFactory<IServer>(callback,tcpbinding, endpoint); factory.Open(); _connection = factory.CreateChannel(); ((ICommunicationObject)_connection).Faulted += new EventHandler(RecreateChannel); try { ((ICommunicationObject)_connection).Open(); } catch (CommunicationException ce) { Console.Write(ce.ToString()); } To invoke the callback I use the following: OperationContext.Current.GetCallbackChannel<IClientCallback>().Log("Hello World!"); But it just hangs there, and after a while the client complains about timeouts. Is there a simple explanation as to why?
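
    A frequent cause of exactly this hang is a callback being made while the service is still inside the original operation: with the default concurrency settings the channel is locked, so the callback blocks until it times out. Two hedged mitigations are to mark the callback operation one-way and to relax the client's callback concurrency (note that the client-side class takes CallbackBehavior, not ServiceBehavior):

        public interface IClientCallback
        {
            [OperationContract(IsOneWay = true)]   // fire-and-forget: the service never waits for the client
            void Log(string content);
        }

        [CallbackBehavior(UseSynchronizationContext = false, ConcurrencyMode = ConcurrencyMode.Reentrant)]
        internal sealed class ClientCallback : IClientCallback
        {
            public void Log(string content)
            {
                Console.Write(content);
            }
        }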

    Read the article

  • SQL Server Blocking Issue

    - by Robin Weston
    We currently have an issue that occurs roughly once a day on a SQL 2005 database server, although the time it happens is not consistent. Basically, the database grinds to a halt, and starts refusing connections with the following error message. This includes logging into SSMS: A connection was successfully established with the server, but then an error occurred during the login process. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) Our CPU usage for SQL is usually around 15%, but when the DB is in its broken state it's around 70%, so it's clearly doing something, even if no-one can connect. Even if I disable the web app that uses the database the CPU still doesn't go down. I am unable to restart the SQLSERVER process as it is unresponsive, so I have to end up killing the process manually, which then puts the DB into Suspect/Recovery mode (which I can fix but it's a pain). Below are some PerfMon stats I gathered when the DB was in its broken state which might help. I have a bunch more if people want to request them: Active Transactions: 2 (Never Changes) Logical Connections: 34 (NC) Process Blocked: 16 (NC) User Connections: 30 (NC) Batch Request: 0 (NC) Active Jobs: 2 (NC) Log Truncations: 596 (NC) Log Shrinks: 24 (NC) Longest Running Transaction Time: 99 (NC) I guess the key is finding out what the DB is using its CPU on, but as I can't even log into SSMS this isn't possible with the standard methods. Disturbingly, I can't even use the dedicated admin connection to get into SSMS. I get the same timeout as with all other requests. Any advice, recommendations, or even sympathy, is much appreciated!
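
    When this happens again, it may still be possible to reach the dedicated admin connection from a command prompt on the server itself; SSMS's Object Explorer cannot use the DAC, which sometimes explains why it appears unusable. A hedged diagnostic sketch:

        sqlcmd -S localhost -A -d master

        -- then, inside the DAC session, look for the head of the blocking chain
        SELECT session_id, blocking_session_id, wait_type, wait_time, command
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;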

    Read the article

  • How to combine twill and python into one code that could be run on "Google App Engine"?

    - by brilliant
    Hello everybody!!! I have installed twill on my computer (having previously installed Python 2.5) and have been using it recently. Python is installed on disk C on my computer: C:\Python25 And the twill folder (“twill-0.9”) is located here: E:\tmp\twill-0.9 Here is a code that I’ve been using in twill: go “some website’s sign-in page URL” formvalue 2 userid “my login” formvalue 2 pass “my password” submit go “URL of some other page from that website” save_html result.txt This code helps me to log in to one website, in which I have an account, record the HTML code of some other page of that website (that I can access only after logging in), and store it in a file named “result.txt” (of course, before using this code I firstly need to replace “my login” with my real login, “my password” with my real password, “some website’s sign-in page URL” and “URL of some other page from that website” with real URLs of that website, and number 2 with the number of the form on that website that is used as a sign-in form on that website’s log-in page) This code I store in “test.twill” file that is located in my “twill-0.9” folder: E:\tmp\twill-0.9\test.twill I run this file from my command prompt: python twill-sh test.twill Now, I also have installed “Google App Engine SDK” from “Google App Engine” and have also been using it for awhile. For example, I’ve been using this code: import hashlib m = hashlib.md5() m.update("Nobody inspects") m.update(" the spammish repetition ") print m.hexdigest() This code helps me transform the phrase “Nobody inspects the spammish repetition” into md5 digest. Now, how can I put these two pieces of code together into one python script that I could run on “Google App Engine”? Let’s say, I want my code to log in to a website from “Google App Engine”, go to another page on that website, record its HTML code (that’s what my twill code does) and than transform this HTML code into its md5 digest (that’s what my second code does). So, how can I combine those two codes into one python code? I guess, it should be done somehow by importing twill, but how can it be done? Can a python code - the one that is being run by “Google App Engine” - import twill from somewhere on the internet? Or, perhaps, twill is already installed on “Google App Engine”?
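
    twill exposes its commands as ordinary Python functions, so the two pieces can live in one script. A hedged sketch, assuming a normal Python 2.5 environment with twill installed; whether twill's networking survives App Engine's sandbox (which restricts raw sockets) is a separate question, and the URLs and form values below are placeholders:

        import hashlib
        from twill.commands import go, fv, submit, save_html

        go("https://example.com/signin")
        fv("2", "userid", "my login")
        fv("2", "pass", "my password")
        submit()
        go("https://example.com/some/other/page")
        save_html("result.txt")

        m = hashlib.md5()
        m.update(open("result.txt").read())
        print m.hexdigest()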

    Read the article

  • Why is tortoise-git changing my file permissions?

    - by Erik Vold
    I switch between using TortoiseGit and command-line git on cygwin very frequently, and lately I've noticed that when I run git status via cygwin and no changes are found, but then I go to use TortoiseGit, right-click on a repo and use the "Git Commit - ..." menu item, I get a list of files that have supposedly changed. Of course, when I review the diff there are no changes to the file contents; it's actually the file permissions which appear to have changed, which git via cygwin does not recognize. So what is wrong with my TortoiseGit setup?
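
    If the only difference TortoiseGit shows is the mode bits (100644 versus 100755), a hedged workaround is to tell Git to ignore executable-bit changes for that repository, which is commonly advised when a working tree is shared between Cygwin and native Windows tools:

        git config core.fileMode false            # for the current repository
        git config --global core.fileMode false   # or for every repository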

    Read the article

  • Windows: Making Windows Explorer distinctive (changing background color or file/folder icons) for specific folder

    - by MacGyver
    Is there a way to change the background color in Windows 7 and Windows Server 2008 when the current folder being displayed meets a certain condition? Or is there a way I can change the icons of the files and folders within that folder so it's distinctive--similar to how Tortoise SVN does it for code checked out from a repository? Why? I'd like to do this for a deployment directory on a live server so users don't accidentally commit code to a certain environment. Like myself. :-)
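
    As far as I know Explorer has no supported per-folder background colour in Windows 7 / Server 2008, but the folder's icon and tooltip can be made distinctive with a desktop.ini file; a hedged sketch (folder and icon paths are placeholders):

        ; D:\Deploy\Production\desktop.ini
        [.ShellClassInfo]
        IconResource=C:\Icons\live-warning.ico,0
        InfoTip=LIVE deployment folder - changes here go straight to production

        :: mark the folder so Explorer honours desktop.ini
        attrib +s "D:\Deploy\Production"
        attrib +h "D:\Deploy\Production\desktop.ini"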

    Read the article

  • Weird problem with load() or live()

    - by silversky
    I load a page with load() and then I dynamically create a tag. Then I use live() to bind a click event that fires a function. At the end I call unload(). The problem is that when I load the same page again (without a refresh), the function fires twice on click. If I exit again (again with unload()) and load the page again, the click fires 3 times, and so on... A sample of my code is: $('#tab').click(function() { $('#formWrap').load('newPage.php'); }); $('div').after('<p class="ctr" ></p>'); $('p.ctr').live('click', function(e) { if($(e.target).is('[k=lf]')) { console.log ('one'); delete ($this); } else if .... }); function delete () { $.post( 'update.php', data); } I have other $.post calls on this page and also in the above live function, and they all work well. The one above also works, but like I said, on the second load it fires twice, then 3 times, and so on... The weird part for me is that if I replace the console call with console.log('two'), save the page and load it without a refresh, it fires on different rows - one, two. If I unload the page, replace the console call with console.log('three') and load again, it fires one, two and three. I have tried to use: $.ajax({ url: 'updateDB.php', data: data, type: 'POST', cache:false }); $.ajaxSetup ({ cache: false }); header("Cache-Control: no-cache"); None of this is working. And I have this problem only with this function. What do you think could be the reason - does it remember the previous action and fire it again?
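
    Because live() registers a delegated handler at the document level, calling it again every time the fragment is loaded stacks another handler, which matches the once / twice / three-times pattern exactly. Two hedged fixes: register the live() handler once at page start-up, outside the code that reloads the fragment, or unbind before rebinding (die() is the pre-jQuery-1.9 counterpart of live()):

        // safe to run each time the fragment is (re)loaded: drop any previous handler first
        $('p.ctr').die('click').live('click', function (e) {
            if ($(e.target).is('[k=lf]')) {
                console.log('one');
            }
        });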

    Read the article

  • How to generate random numbers of lognormal distribution within specific range in Matlab

    - by Harpreet
    My grain sizes are defined as D=[1.19,1.00,0.84,0.71,0.59,0.50,0.42]. The problem is described below in steps. Grain sizes should follow lognormal distribution. The mean of the grain sizes is fixed as 0.84 and the standard deviation should be as low as possible but not zero. 90% of the grains (by weight %) fall in the size range of 1.19 to 0.59, and the rest 10% fall in size range of 0.50 to 0.42. Now I want to find the probabilities (weight percentage) of the grains falling in each grain size. It is allowable to split this grain size distribution into further small sizes but it must always be in the range of 1.19 and 0.42, i.e. 'D' can be continuous but 0.42 < D < 1.19. I need it fast. I tried on my own but I am not able to get the correct result. I am getting negative probabilities (weight percentages). Thanks to anyone who helps. I didn't incorporate the point 3 as I came to know about that condition later. Here are simple steps I tried: %% D=[1.19,1.00,0.84,0.71,0.59,0.50,0.42]; s=0.30; % std dev of the lognormal distribution m=0.84; % mean of the lognormal distribution mu=log(m^2/sqrt(s^2+m^2)); % mean of the associated normal dist. sigma=sqrt(log((s^2/m^2)+1)); % std dev of the associated normal dist. [r,c]=size(D); for i=1:c D(i)=mu+(sigma.*randn(1)); w(i)=(log(D(i))-mu)/sigma; % the probability or the wt. percentage of the grain sizes end grain_size=exp(D); %%
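
    If the goal is the probability mass (weight fraction) of each size class rather than random samples, one hedged approach is to integrate the lognormal density between class boundaries with logncdf (Statistics Toolbox), reusing the mu and sigma already derived above; treating each listed size as the midpoint of a bin is an assumption:

        D = [1.19 1.00 0.84 0.71 0.59 0.50 0.42];
        s = 0.30;  m = 0.84;
        mu    = log(m^2 / sqrt(s^2 + m^2));
        sigma = sqrt(log(s^2 / m^2 + 1));

        % bin edges halfway between successive sizes, clipped to the allowed range
        edges = [D(1), (D(1:end-1) + D(2:end)) / 2, D(end)];
        w = logncdf(edges(1:end-1), mu, sigma) - logncdf(edges(2:end), mu, sigma);
        w = w / sum(w);   % renormalise so the truncated 0.42..1.19 range sums to 1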

    Read the article

  • How to more effectively debug PHP code, in the vein of JavaScript with Firebug or XCode?

    - by racl101
    Hi everyone, I'm kind of a newbie with PHP so please bear with me. I would like to know if there is an effective way of debugging PHP code so that I don't have to have debugging messages displayed in the browser. For example, I find the var_dump and print_r functions to be excellent for debugging variables, function calls and arrays respectively. The problem is I have been asked to debug code on a live site (no dev site; I know it is a horrible practice, but this was not my project from the start). So I would like to know what core function, PHP library or anything else I could use to write debugging calls to a log that I can check, so I don't have to send debugging output to the browser on a live site. I like the way you can use the console.log function in JavaScript code and check it in Firebug or the WebKit console, and I also like the console window in Xcode, and I was wondering if there was some similar tool for PHP debugging. Any extra info and pearls of wisdom would be greatly appreciated. Thanks, racl101.
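
    PHP's built-in error_log() is the closest core-function analogue to console.log: it appends to the log file configured in php.ini (or set at runtime), so nothing is sent to the browser on the live site. A hedged sketch (the log path is a placeholder that must be writable by the web server, and the variables are illustrative):

        <?php
        ini_set('log_errors', '1');
        ini_set('display_errors', '0');                        // keep output away from visitors
        ini_set('error_log', '/var/log/php/debug.log');        // placeholder path

        error_log('order total: ' . $total);                   // scalar values
        error_log('cart contents: ' . print_r($cart, true));   // arrays/objects, print_r-style, straight to the log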

    Read the article

  • Java split giving opposite order of Arabic characters

    - by MuhammadA
    I am splitting the following string using \\| in java (android) using the IntelliJ 12 IDE. Everything is fine except the last part, somehow the split picks them up in the opposite order : As you can see the real positioning 34,35,36 is correct and according to the string, but when it gets picked out into split part no 5 its in the wrong order, 36,35,34 ... Any way I can get them to be in the right order? My Code: public ArrayList<Book> getBooksFromDatFile(Context context, String fileName) { ArrayList<Book> books = new ArrayList<Book>(); try { // load csv from assets InputStream is = context.getAssets().open(fileName); try { BufferedReader reader = new BufferedReader(new InputStreamReader(is)); String line; while ((line = reader.readLine()) != null) { String[] RowData = line.split("\\|"); books.add(new Book(RowData[0], RowData[1], RowData[2], RowData[3], RowData[4], RowData[5])); } } catch (IOException ex) { Log.e(TAG, "Error parsing csv file!"); } finally { try { is.close(); } catch (IOException e) { Log.e(TAG, "Error closing input stream!"); } } } catch (IOException ex) { Log.e(TAG, "Error reading .dat file from assets!"); } return books; }
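
    It may help to establish whether the split parts are really reversed in memory or whether the IDE is just rendering the right-to-left text visually reordered (a bidi display effect). A hedged check that prints the raw UTF-16 code units in storage order, using the same Android Log class as the code above:

        String[] rowData = line.split("\\|");
        for (String part : rowData) {
            StringBuilder sb = new StringBuilder();
            for (char c : part.toCharArray()) {
                sb.append(String.format("U+%04X ", (int) c));   // logical (storage) order, independent of rendering
            }
            Log.d(TAG, part + " -> " + sb);
        }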

    Read the article

  • debian gateway using iptables

    - by meijuh
    I am having problems setting up a debian gateway server. My goal: Having eth1 the WAN interface. Having eth0 the LAN interface. Allow both ports 22 (SSH) and 80 (HTTP) accessed from the outside world on the gateway (SSH and HTTP run on this server). What I did was the following: Create a file /etc/iptables.rules with contents: /etc/iptables.rules: *nat -A POSTROUTING -o eth1 -j MASQUERADE COMMIT *filter -A INPUT -i lo -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -i eth1 -j DROP COMMIT edit /etc/network/interfaces as follows: /etc/network/interfaces: # The loopback network interface auto lo iface lo inet loopback pre-up iptables-restore < /etc/iptables.rules auto eth0 allow-hotplug eth0 iface eth0 inet dhcp #auto eth1 #allow-hotplug eth1 #iface eth1 inet dhcp allow-hotplug eth1 iface eth1 inet static address 217.119.224.51 netmask 255.255.255.248 gateway 217.119.224.49 dns-nameservers 217.119.226.67 217.119.226.68 Uncomment the rule net.ipv4.ip_forward=1 in /etc/sysctl.conf to allow packet forwarding. The static settings for eth1 such as the ip address I got from my router (which I want to replace); I simply copied these. I have a (windows) DNS + DHCP server on ip address 10.180.1.10, which assigns ip address 10.180.1.44 to eth0. What this server does is not really interesting it only maps domain names on our local network and assigns one static ip to the gateway. What works: on the gateway itself I can ping 8.8.8.8 and google.nl. So that is okey. What does not work: (1) Every machine connected to eth0 (indirectly via a switch) can not ping an ip or a domain. So I guess the gateway can not be found. (2) Also when I configure my linux machine (a laptop) to use a static ip 10.180.1.41, a mask and a gateway (10.180.1.44) I can not ping an ip or domain either. This means that maybe my iptables is incorrect of not loaded correctly. Or I maybe have to configure my DNS/DHCP on my windows machine. I have not reset the windows machine net, restart the DNS/DHCP services, should I do this? I did not install dnsmasq as desribed here: http://blog.noviantech.com/2010/12/22/debian-router-gateway-in-15-minutes/. I don't think this is necessary?
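
    Two things worth double-checking (a hedged sketch rather than a confirmed diagnosis): that forwarding is actually active at runtime (editing /etc/sysctl.conf alone only takes effect after sysctl -p or a reboot), and that the FORWARD chain explicitly allows traffic between eth0 and eth1:

        # confirm forwarding is on right now
        sysctl net.ipv4.ip_forward
        sysctl -w net.ipv4.ip_forward=1

        # filter section of /etc/iptables.rules with explicit forwarding rules added
        *filter
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -i eth1 -j DROP
        -A FORWARD -i eth0 -o eth1 -j ACCEPT
        -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        COMMIT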

    Read the article

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The svn repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly Wordpress blogs), but in future they may hold other ASP code as well. Bonus points for a solution that works in more or less the same way with ASP code on Windows Server 2008 R2. We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk. The live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: develop on the dev's box - commit into the trunk - auto-deploy on the staging system - test on the staging system - merge into /branches/live/ - manually deploy on the live system. This works very well for one-way changes, however we have some trouble with every Wordpress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the svn admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. In the end, we have done the update via the WP GUI, restored the svn admin area, added/removed the files and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system). I'm currently thinking of the following: the htdocs of each website is an svn export; each website has an svn working copy beside the htdocs directory; and a script "replays" the changes from htdocs into the working copy after a WP update (rsync'ing the changed files to the working copy, rsync'ing new files and svn add-ing them, and finally svn delete-ing the deleted files). The script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.). Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
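
    A hedged sketch of the "replay" step described above (paths, exclude patterns and the commit message are placeholders, and paths without spaces are assumed):

        #!/bin/sh
        # replay changes from the htdocs export into the Subversion working copy
        HTDOCS=/var/www/site/htdocs
        WC=/var/www/site/wc

        rsync -a --delete \
              --exclude 'wp-config.php' \
              --exclude 'wp-content/uploads/' \
              --exclude '.svn' \
              "$HTDOCS/" "$WC/"

        cd "$WC" || exit 1
        # schedule new files for addition and missing files for deletion
        svn status | awk '/^\?/ {print $2}' | xargs -r svn add
        svn status | awk '/^!/ {print $2}' | xargs -r svn delete
        svn commit -m "Replay WordPress update from htdocs"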

    Read the article

  • Ubuntu shutdown hook?

    - by ???
    Before shutdown, I want to check 10+ local Git repositories for any unpushed commits. (I always forget to push them, so the next morning I arrive at the office and have to go back home again.) I think the shutdown process could check some conditions, and if any condition is not met, give the user the choice to continue the shutdown or just cancel it. Then I could write something that hooks into shutdown and checks whether my Git repositories still need to be pushed.
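
    A hedged sketch of the check itself, which could then be called from a shutdown script or a logout hook (repository paths are placeholders; git log --branches --not --remotes lists commits that exist locally but on no remote):

        #!/bin/sh
        # warn about unpushed commits in a list of repositories
        for repo in "$HOME/work/repo1" "$HOME/work/repo2"; do
            cd "$repo" || continue
            unpushed=$(git log --branches --not --remotes --oneline)
            if [ -n "$unpushed" ]; then
                echo "Unpushed commits in $repo:"
                echo "$unpushed"
            fi
        done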

    Read the article
