Search Results

Search found 80052 results on 3203 pages for 'data load performance'.


  • No such file to load, Model/Lib naming conflict?

    - by Tom
    I'm working on a Rails application. I have a module called Animals. Inside this module is a class with the same name as one of my models (Dog). show_animal action: def show_animal require 'Animals/Bear.rb' #Works require 'Animals/Dog.rb' #Fails end The first require definitely works, but the second fails with: MissingSourceFile (no such file to load -- Animals/Dog.rb). I noticed that Dog.rb has the same file name as one of my models; is that what's causing this? I'm using WEBrick.

    Read the article

  • Silverlight client load operation failed for query "Login". [GenericParameterNotValid]

    - by cvoeller
    I have one user who gets the following error when trying to log in: "Silverlight client load operation failed for query "Login". [GenericParameterNotValid]". The odd thing is that other users are able to log in without issue, and I can log in using the "problem" account from other machines. At this point I think it's got to be a client-side configuration issue. My next step is to confirm the client-side Silverlight version, but I don't know where to go after that. Do you have any suggestions?

    Read the article

  • What's the best way of using MVC + Ajax (jQuery) to load page content: aspx, ascx, or both?

    - by devzero
    I want to have a menu that, when clicked, replaces the content of a "main" div with content from an MVC view. This works just fine if I use a .aspx page, but any master page content is then duplicated (like the <head> section and any css/js). If I do the same with a .ascx user control, the content is loaded without the extras, but if a browser loads the menu item directly (i.e. search bots or someone with JS disabled), the page is displayed without the master page content. The best solution I've found so far is to create the content as a .ascx control, then have a .aspx page load it when it's called directly from the menu link, while the Ajax JavaScript modifies the link to use only the .ascx. This leads to a lot of duplication though, as every user control needs its own .aspx page. I was wondering if there is any better way of doing this? Could, for example, the master page hide everything that's not from the .aspx page if it was called with the parameter ?ajax=true?

    Read the article

  • Is vim able to detect the natural language of a file, then load the correct dictionary?

    - by Niels
    I am using several languages, and currently I have to tell vim which of them the spell check should use. Is there a way to set up vim so that it automatically detects the correct one? I vaguely remember that in a previous version of vim, when the spell check was not integrated, the vimspell script made this possible. It would be even better if this could apply not only to a file but also to a portion of a file, since I frequently mix several languages in a single file. Of course, I would like to avoid loading several dictionaries simultaneously.

    Read the article

  • Can I load lots of data only once and use it on each request?

    - by eikes
    Is there a way to load a big data object, which normally has to be loaded on each request, into memory just once? In Java you can instantiate an object in a servlet when the servlet is loaded, and once it's there you can use it in any request; an example is below. Can this be done in PHP? public class SampleServlet extends HttpServlet { private static HugeGraphObject hgo; public void init() { hgo = HugeGraphObjectFactory.get(); } protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String param = request.getParameter("q"); response.getWriter().write(hgo.getSomeThing(param)); } }

    Read the article

  • Silverlight performance with a large number of objects in an org chart

    - by KC
    Hello, I'm working on an org chart project (Silverlight 3) and I'm seeing the UI thread hang when the chart is building around 2,000 nodes; when it renders, it takes about a minute and then the FPS drops to a crawl. Here is the code flow. Page.xaml.cs calls a WCF service that returns a list of AD users. Then we use LINQ to build a collection of nodes to bind to OrgChart.cs. OrgChart.cs is a canvas and displays a collection of nodes and connecting lines. Node.cs is a canvas, holds user data, and can contain child nodes. NodeContent.xaml is a user control that has borders so I can set the background, textblocks to display the user's data, events that handle the selected and expanded nodes, and storyboards that resize the nodes when they are selected or expanded. I noticed during hours of debugging that the InitializeComponent(); call, where the XAML is loaded, seems to be where the performance hit is happening: System.Windows.Application.LoadComponent(this, new System.Uri("/Silverlight.Custom;component/NodeContent.xaml", System.UriKind.Relative)); So I guess I have two questions. Can threading help in any way with the UI thread hanging while drawing the nodes? How can I avoid the hit when calling this user control? Any advice or direction anyone can lend would be greatly appreciated. Thanks, KC

    Read the article

  • Why can't perfmon see instances of my custom performance counter?

    - by spoulson
    I'm creating some custom performance counters for an application. I wrote a simple C# tool to create the categories and counters. For example, the code snippet below is basically what I'm running. Then, I run a separate app that endlessly refreshes the raw value of the counter. While that runs, the counter and dummy instance are seen locally in perfmon. The problem I'm having is that the monitoring system we use can't see the instances in the multi-instance counter I've created when viewing remotely from another server. When using perfmon to browse the counters, I can see the category and counters, but the instances box is grayed out and I can't even select "All instances", nor can I click "Add". Other access methods, like typeperf, exhibit similar issues. I'm not sure if this is a server or code issue. This is only reproducible in the production environment where I need it. On my desktop and development servers, it works great. I'm a local admin on all servers. CounterCreationDataCollection collection = new CounterCreationDataCollection(); var category_name = "My Application"; var counter_name = "My counter name"; CounterCreationData ccd = new CounterCreationData(); ccd.CounterType = PerformanceCounterType.RateOfCountsPerSecond64; ccd.CounterName = counter_name; ccd.CounterHelp = counter_name; collection.Add(ccd); PerformanceCounterCategory.Create(category_name, category_name, PerformanceCounterCategoryType.MultiInstance, collection); Then, in a separate app, I run this to generate dummy instance data: var pc = new PerformanceCounter(category_name, counter_name, instance_name, false); while (true) { pc.RawValue = 0; Thread.Sleep(1000); }

    Read the article

  • Not possible to load a DropDownList on a FormView from code-behind?

    - by tbone
    I have a UserControl containing a FormView, which contains a DropDownList. The FormView is bound to a data control, like so: <asp:FormView ID="frmEdit" DataKeyNames="MetricCode" runat="server" DefaultMode="Edit" DataSourceID="llbDataSource" Cellpadding="0" > <EditItemTemplate> <asp:DropDownList ID="ParentMetricCode" runat="server" SelectedValue='<%# Bind("ParentMetricCode") %>' /> etc., etc. I am trying to populate the DropDownList from the code-behind. If this were not contained in a FormView, I would normally just do it in the Page_Load event. However, that does not work within a FormView: as soon as I try to access the DropDownList in code, i.e. theListcontrol = CType(formView.FindControl(listControlName), System.Web.UI.WebControls.ListControl) ...the data binding mechanism of the FormView is invoked, which, of course, tries to bind the DropDownList to the underlying data source, causing a "'ParentMetricCode' has a SelectedValue which is invalid because it does not exist in the list of items. Parameter name: value" error, since the DropDownList has not yet been populated. I tried performing the load in the DataBinding() event of the FormView, but then theListcontrol = CType(formView.FindControl(listControlName), System.Web.UI.WebControls.ListControl) ...fails, as FormView.Controls.Count = 0 at that point. Is this impossible? (I do not want to have to use a secondary ObjectDataSource to bind the DropDownList to.)

    Read the article

  • How can I test this SQL Server performance utility?

    - by Martin Smith
    As part of my MSc I need to do a three-month project later this year. I have decided to do something which will likely be useful for me in the workplace and spend the time getting to understand SQL Server internals. The deliverable for this project will be a performance advisor looking at a variety of different rules. Some are static, such as finding redundant indexes; some are more dynamic, such as using XEvents to find outlying stored procedure execution times when certain parameters are passed. I am struggling to come up with a good way of testing this though. I can obviously design a "bad" database and a synthetic workload that my tool will pick up issues on, but I also need to demonstrate that it has real-world utility. Looking at the self-tuning database literature it is common to use TPC benchmarks, but I've had a look at the TPC-C site and it looks very time-consuming to implement and not that good a fit to my project's testing needs in any event (I would still be able to "rig" it by the decisions I made on indexing or physical architecture). Plan A would be to find willing beta tester(s), but in the event that isn't possible I will need a fallback plan. The best idea I have come up with so far is to use the various MS sample applications as examples of real-world applications, e.g. http://msftdpprodsamples.codeplex.com/ http://www.asp.net/community/projects/ Does anyone have any better suggestions?

    Read the article

  • Memory allocation in detached NSThread to load an NSDictionary in background?

    - by mobibob
    I am trying to launch a background thread to retrieve XML data from a web service. I developed it synchronously - without threads - so I know that part works. Now I am ready to have a non-blocking service by spawning a thread to wait for the response and parse it. I created an NSAutoreleasePool inside the thread and release it at the end of the parsing. The code to spawn the thread, and the thread itself, are as follows: Spawn from main-loop code: . . [NSThread detachNewThreadSelector:@selector(spawnRequestThread:) toTarget:self withObject:url]; . . Thread (inside 'self'): -(void) spawnRequestThread: (NSURL*) url { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; parser = [[NSXMLParser alloc] initWithContentsOfURL:url]; [self parseContentsOfResponse]; [parser release]; [pool release]; } The method parseContentsOfResponse fills an NSMutableDictionary with the parsed document contents. I would like to avoid moving the data around a lot, and instead allocate it back in the main loop that spawned the thread rather than making a copy. First, is that possible, and if not, can I simply pass in an allocated pointer from the main thread and allocate with the 'dictionaryWithDictionary' method? That just seems so inefficient. Are there preferred designs?

    Read the article

  • What is the performance penalty of XML data type in SQL Server when compared to NVARCHAR(MAX)?

    - by Piotr Owsiak
    I have a DB that is going to keep log entries. One of the columns in the log table contains serialized (to XML) objects, and a guy on my team proposed to go with the XML data type rather than NVARCHAR(MAX). This table will have logs kept "forever" (archiving some very old entries may be considered in the future). I'm a little worried about the CPU overhead, but I'm even more worried that the DB can grow faster (FoxyBOA from the referenced question got a 70% bigger DB when using XML). I have read this question http://stackoverflow.com/questions/514827/microsoft-sql-server-2005-2008-xml-vs-text-varchar-data-type and it gave me some ideas, but I am particularly interested in clarification on whether the DB size increases or decreases. Can you please share your insight/experiences in that matter. BTW. I don't currently have any need to depend on XML features within SQL Server (there's nearly zero advantage to me in the specific case). Occasionally log entries will be extracted, but I prefer to handle the XML using .NET (either by writing a small client or using a function defined in a .NET assembly).

    Read the article

  • Why does the performance of the following code degrade when I use threads?

    - by DotNetBeginner
    Why does the performance of the following code degrade when I use threads? 1. Without threads: int[] arr = new int[100000000]; //Array elements - [0][1][2][3]---[100000000-1] addWithOutThreading(arr); // Time required for this operation - 1.16 sec Definition for addWithOutThreading: public void addWithOutThreading(int[] arr) { UInt64 result = 0; for (int i = 0; i < 100000000; i++) { result = result + Convert.ToUInt64(arr[i]); } Console.WriteLine("Addition = " + result.ToString()); } 2. With threads: int[] arr = new int[100000000]; int part = (100000000 / 4); UInt64 res1 = 0, res2 = 0, res3 = 0, res4 = 0; ThreadStart starter1 = delegate { addWithThreading(arr, 0, part, ref res1); }; ThreadStart starter2 = delegate { addWithThreading(arr, part, part * 2, ref res2); }; ThreadStart starter3 = delegate { addWithThreading(arr, part * 2, part * 3, ref res3); }; ThreadStart starter4 = delegate { addWithThreading(arr, part * 3, part * 4, ref res4); }; Thread t1 = new Thread(starter1); Thread t2 = new Thread(starter2); Thread t3 = new Thread(starter3); Thread t4 = new Thread(starter4); t1.Start(); t2.Start(); t3.Start(); t4.Start(); t1.Join(); t2.Join(); t3.Join(); t4.Join(); Console.WriteLine("Addition = "+(res1+res2+res3+res4).ToString()); // Time required for this operation - 1.30 sec Definition for addWithThreading: public void addWithThreading(int[] arr,int startIndex, int endIndex,ref UInt64 result) { for (int i = startIndex; i < endIndex; i++) { result = result + Convert.ToUInt64(arr[i]); } }
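
    For comparison, here is a hedged Java sketch of the same partitioning idea (Java rather than the poster's C#). Each worker accumulates into a local variable and the partial sums are combined once at the end, instead of writing through a shared ref parameter on every iteration; whether that per-iteration ref write, or the per-element Convert.ToUInt64 call, is the real cause of the slowdown above is an assumption, not a confirmed diagnosis.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class PartitionedSum {
            // Sum a large array by splitting it into equal ranges, one range per worker.
            static long parallelSum(int[] arr, int workers) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(workers);
                int part = arr.length / workers;
                List<Future<Long>> partials = new ArrayList<>();
                for (int w = 0; w < workers; w++) {
                    final int start = w * part;
                    final int end = (w == workers - 1) ? arr.length : start + part;
                    partials.add(pool.submit(() -> {
                        long local = 0;                       // thread-local accumulator
                        for (int i = start; i < end; i++) {
                            local += arr[i];
                        }
                        return local;                         // one write per worker, not per element
                    }));
                }
                long total = 0;
                for (Future<Long> f : partials) {
                    total += f.get();
                }
                pool.shutdown();
                return total;
            }

            public static void main(String[] args) throws Exception {
                int[] arr = new int[100_000_000];             // same size as in the question
                System.out.println("Addition = " + parallelSum(arr, 4));
            }
        }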

    Read the article

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL server 5.0.75 on Ubuntu, on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load will rise very high, while the memory usage stays low, and the application server is no longer responsive since it's waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140MB of data, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that. My question is, where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling suggests either of those, or else upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks! EDIT: I just discovered this in my slow queries log: # Time: 101116 11:17:00 # User@Host: user[pass] @ [host] # Query_time: 4063 Lock_time: 1035 Rows_sent: 0 Rows_examined: 19960174 SELECT * FROM contacts WHERE contacts.contact_id IN (SELECT external_id FROM contact_relations WHERE external_table = 'contacts' AND contact_id IN (SELECT contact_id FROM contacts WHERE (company_name like '%%butan%%%' OR country like '%%butan%%%' OR city like '%%butan%%%' OR email1 like '%%butan%%%') AND (company_name is not null and company_name != ''))); Which actually brings up a different but related question: if I have a contacts table containing John Smith, The Fun Factory, 555-1212, [email protected], what's the best way to search for that record using "factory" as a search key? FULLTEXT rarely seems to find items in the middle of a word; for example, "actor" should bring up "Factory".

    Read the article

  • What are the JavaScript performance tradeoffs in adding IDs to DOM elements?

    - by Blinky
    For example, I have a website which has a dynamically generated segment of its DOM which looks roughly like this: <div id="master"> <div class="child"> ... </div> <div class="child"> ... </div> <div class="child"> ... </div> ... <div class="child"> ... </div> </div> There might be hundreds of child elements in this manner, and they, too, might have yet more children. At present, none of them have IDs except for the master. JavaScript manipulates them via DOM-walking. It's a little scary. I'd like to add IDs, e.g.: <div id="master"> <div id="child1" class="child"> ... </div> <div id="child2" class="child"> ... </div> <div id="child3" class="child"> ... </div> ... <div id="childN" class="child"> ... </div> </div> What are the tradeoffs in adding IDs to these divs? Certainly, jQuery must maintain a hash of IDs, so there's going to be some sort of performance hit. Are there any general guidelines on when adding additional IDs is a bad idea? Is jQuery so awesome that, in practice, it doesn't matter?

    Read the article

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here. So I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables: USERS GROUPS GROUP_USERS The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It only has two columns: USER_ID GROUP_ID Since every user can be a member of several groups, I decided to make a separate table for this purpose, rather than storing a comma-separated column in the USERS table. Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique, and if I give them the constraint UNIQUE, pgAdmin tells me that each table should have at least one primary key. But now I am stuck with what seems to be a lot of indexes and relations to a very small table only containing numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's just all numbers anyway, but it feels quite stupid to have a full-sized index referring to a smaller table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there any other solution I should be aware of? PS. I don't want to use an array column, even if they are supported by PostgreSQL. I want to be as generic as possible so I can switch databases later on, if necessary. EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance rather than the opposite, due to the size of the generated index? Thank you!

    Read the article

  • Why does reusing arrays increase performance so significantly in C#?

    - by Willem
    In my code, I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task, I allocate memory for an array: double[] tempDoubleArray = new double[M]; M is a large number depending on the precise task, typically around 2000000. Now, I do some complex calculations to fill the array, and in the end I use the array to determine the result of this task. After that, tempDoubleArray goes out of scope. Profiling reveals that the calls to construct the arrays are time-consuming. So, I decided to try and reuse the array, by making it static and reusing it. It requires some additional juggling to figure out the minimum size of the array, requiring an extra pass through all tasks, but it works. Now, the program is much faster (from 80 sec to 22 sec for execution of all tasks). double[] tempDoubleArray = staticDoubleArray; However, I'm a bit in the dark about why precisely this works so well. I'd say that in the original code, when tempDoubleArray goes out of scope, it can be collected, so allocating a new array should not be that hard, right? I ask this because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in what cases allocation gives performance issues.
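
    A hedged Java sketch of the reuse pattern described above. The task loop, sizes, and the Math.sin stand-in computation are illustrative assumptions; in the poster's .NET case, one plausible explanation for the speedup is that repeatedly allocating 2,000,000-element arrays means repeated zero-initialization and garbage-collector work on very large objects, but that is an interpretation rather than something established in the entry.

        public class BufferReuse {
            // One scratch buffer shared across tasks; grows only when a bigger task appears.
            // Not thread-safe as written: fine for a single-threaded task loop.
            private static double[] scratch = new double[0];

            private static double[] buffer(int needed) {
                if (scratch.length < needed) {
                    scratch = new double[needed];
                }
                return scratch;
            }

            // One "task": fill the first m slots, then reduce them to a single result.
            static double runTask(int m) {
                double[] temp = buffer(m);        // reuse instead of new double[m] each time
                for (int i = 0; i < m; i++) {
                    temp[i] = Math.sin(i);        // stand-in for the real per-task computation
                }
                double result = 0;
                for (int i = 0; i < m; i++) {
                    result += temp[i];
                }
                return result;
            }

            public static void main(String[] args) {
                double total = 0;
                for (int task = 0; task < 500; task++) {   // ~500 tasks, as in the entry
                    total += runTask(2_000_000);
                }
                System.out.println(total);
            }
        }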

    Read the article

  • Entities used to serialize data have changed. How can the serialized data be upgraded for the new entities?

    - by i8abug
    Hi, I have a bunch of simple entity instances that I have serialized to a file. I know that the structure of these entities will change in the future (e.g. maybe I will rename Name to Header or something). The thing is, I don't want to lose the data that I have saved in all these old files. What is the proper way to either (1) load the data from the old entities into new entities, or (2) upgrade the old files so that they can be used with the new entities? Note: I think I am stuck with binary serialization, not XML serialization. Thanks in advance! Edit: So I have an answer for the case I have described. I can use a DataContractSerializer and do something like [DataMember(Name = "bar")] private string foo; changing the name in the code while keeping the same name that was used for serialization. But what about the following additional cases: (1) the original entity has new members which can be serialized; (2) some serialized members that were in the original entity are removed; (3) some members have actually changed in function (suppose that the original class had FirstName and LastName members and it has been refactored to have only a FullName member which combines the two). To handle these, I need some sort of interpreter/translator deserialization class, but I have no idea what I should use.
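
    The "interpreter/translator" idea mentioned at the end can be sketched as a small upgrade step that reads data in its old shape and copies it into the new one. Below is a hedged Java illustration of that idea only; the OldContact/Contact classes and field names are hypothetical, and the original question concerns .NET binary serialization rather than Java.

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.Serializable;

        // Hypothetical old on-disk shape: two name fields.
        class OldContact implements Serializable {
            private static final long serialVersionUID = 1L;
            String firstName;
            String lastName;
        }

        // Hypothetical new entity: the two fields merged into one.
        class Contact {
            String fullName;
        }

        class ContactUpgrader {
            // Map an old instance onto the new shape (the "translator" step).
            static Contact upgrade(OldContact old) {
                Contact c = new Contact();
                c.fullName = (old.firstName + " " + old.lastName).trim();
                return c;
            }

            // Read a file written with the old class, then hand back the upgraded entity.
            static Contact upgradeFile(File f) throws IOException, ClassNotFoundException {
                try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
                    OldContact old = (OldContact) in.readObject();
                    return upgrade(old);
                }
            }
        }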

    Read the article

  • byte and short data types in Java can accept values outside their range via an explicit cast; the wider data types, however, cannot. Why?

    - by Lion
    Let's consider the following expressions in Java. byte a = 32; byte b = (byte) 250; int i = a + b; This is valid in Java even though the expression byte b = (byte) 250; is forced to assign the value 250 to b, which is outside the range of the type byte. Therefore, b is assigned -6, and consequently i is assigned the value 26 through the statement int i = a + b;. The same thing is possible with short as follows: short s1=(short) 567889999; Although the specified value is outside the range of short, this statement is legal. The same thing is, however, not allowed with wider data types such as int, double, float, etc., and hence the following case is invalid and causes a compile-time error: int z=2147483648; This is illegal, since the range of int in Java is from -2,147,483,648 to 2,147,483,647, which the above statement exceeds. Why is this not the case with the byte and short data types in Java?
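
    A compilable illustration of the behaviour described above: the printed values follow from two's-complement narrowing (the cast keeps only the low-order bits of the value). The final long-literal cast is an added comparison, not part of the original question.

        public class NarrowingDemo {
            public static void main(String[] args) {
                byte a = 32;
                byte b = (byte) 250;           // 250 does not fit in a byte; only the low 8 bits are kept
                int i = a + b;
                System.out.println(b);         // -6
                System.out.println(i);         // 26

                short s1 = (short) 567889999;  // same idea: only the low 16 bits survive
                System.out.println(s1);

                // int z = 2147483648;         // does not compile: the int literal itself is out of range
                int z = (int) 2147483648L;     // but an explicit cast from a long literal narrows just like the byte case
                System.out.println(z);         // -2147483648
            }
        }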

    Read the article

  • Windows 7 extremely slow login, Exchange performance, printer enumeration, etc.

    - by Jeff
    Background: I have a fresh copy of Windows 7 Professional x64 on a Dell Latitude E6500. The laptop has 8GB RAM, a 250GB drive, and all Intel peripherals (net/wifi/graphics). All available Windows updates, as well as hardware drivers, are installed. The IT folks where I work joined the computer to our Windows 2003-based Active Directory domain. There are no errors in any logs that we've looked at, and Group Policy templates appear to have applied properly. Problem: Every time I turn on or reboot the computer, it takes between 2 and 10 minutes (all times are actual) after successfully typing my username/password to get to my desktop. My login script does not always run. Sometimes I get a black screen, and a couple of minutes later the login script will pop up and take up to 10 minutes to complete. I can get around this by hitting Ctrl-Shift-Esc and running explorer.exe from the Task Manager. The login script continues to hang, but I can minimize it and go on about my business. Either way, it generally throws errors prior to completing. I often get slow or failed connectivity to Exchange via Outlook. When I bring up printer dialogs, they take several minutes to populate, and block the calling app while doing so. Copies to SMB shares are very slow. On my home network, everything works fine. On both the work network and home network, I can use remote internet resources just fine. Web pages pull up, remote VPNs are fine, I can max out bandwidth on the SpeakEasy Speed Test, and I can get almost max bandwidth transferring FTP/HTTP over a LAN. Another symptom of the problem is that when I first log in, the work network shows as "Identifying" for a long time in the Network and Sharing Center, and will often then change to the name of the work domain, but say "Unauthenticated Network". Note that this computer previously ran Windows Vista with none of these problems. Attempts to fix: installed the Win7 admin pack; uninstalled/reinstalled all hardware drivers; verified Active Directory DNS settings (Vista works relatively well on the same network); reset all TCP/IP settings on all adapters using the netsh commands; disabled IPv6 on all adapters; disabled the wifi adapter while on the work network; locked the network card to 100/Full and 1000/Full (also tried Auto); added various important addresses to the hosts file (Exchange, DNS, AD), removed when it didn't help; my background is a jpeg (sounds unrelated, but there is apparently a Win7 login bug related to solid color backgrounds); and more I have forgotten. The IT staff at my company indicated they believe this is due to having Windows 2003 AD servers and not having any Windows 2008 R2 AD servers. Other than that, they have no advice or assistance to offer other than a rebuild (already tried that once with similar symptoms) or a downgrade to Vista. Any thoughts out there?

    Read the article

  • How can I set my bootloader to load my primary (C:) partition?

    - by acidzombie24
    I created 4 partitions and want to use them to have separate Windows XP, Windows 7, and (possibly) Windows Vista installations, plus "WinDummy" (to test applications in Vista, XP or another OS). I used Norton Ghost to install an OS to the drive in about 3 minutes. My problem is that I installed the spare first on the 4th partition, then Windows 7 on the second. I tried to set the bootloader (with EasyBCD) to use the first partition - but it doesn't want to. Here's my debug output from EasyBCD. As you can see, the device is set to H and I can't figure out how to change it. I can make my bootloader use Windows 7 first, but I can't make it use my C: install of XP instead of my spare H:. How would I fix this? Windows Boot Manager -------------------- identifier {9dea862c-5cdd-4e70-acc1-f32b344d4795} device partition=H: description Windows Boot Manager locale en-US inherit {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e} default {bc2d8409-8640-11de-aa7e-a477d86453c4} resumeobject {bc2d8405-8640-11de-aa7e-a477d86453c4} displayorder {bc2d8409-8640-11de-aa7e-a477d86453c4} {bc2d8406-8640-11de-aa7e-a477d86453c4} {bc2d8404-8640-11de-aa7e-a477d86453c4} {466f5a88-0af2-4f76-9038-095b170dc21c} toolsdisplayorder {b2721d73-1db4-4c62-bf78-c548a880142d} timeout 3 Real-mode Boot Sector --------------------- identifier {bc2d8409-8640-11de-aa7e-a477d86453c4} device partition=C: path \NTLDR description Windows XP Windows Boot Loader ------------------- identifier {bc2d8406-8640-11de-aa7e-a477d86453c4} device partition=D: path \Windows\system32\winload.exe description Windows 7 locale en-US inherit {6efb52bf-1766-41db-a6b3-0ee5eff72bd7} recoverysequence {bc2d8407-8640-11de-aa7e-a477d86453c4} recoveryenabled Yes osdevice partition=D: systemroot \Windows resumeobject {bc2d8405-8640-11de-aa7e-a477d86453c4} nx OptIn Windows Boot Loader ------------------- identifier {bc2d8404-8640-11de-aa7e-a477d86453c4} device partition=E: path \Windows\system32\winload.exe description Blank osdevice partition=E: systemroot \Windows Windows Legacy OS Loader ------------------------ identifier {466f5a88-0af2-4f76-9038-095b170dc21c} device partition=H: path \ntldr description Windows XP Spare

    Read the article

  • How to load the environment variables at boot time before X11 on Ubuntu Precise?

    - by Fnux
    Using Ubuntu Precise 64-bit, I'm facing a problem that I'm unable to solve and that I'll try to describe below. I'm using a console-mode program (let's say abc) that uses Go, NodeJS, Java and Scala. In order for abc to work with these languages, I have to declare the following statements: a) within /etc/environment: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin" CLASSPATH=$CLASSPATH:/usr/share/java/scala-library.jar b) within /etc/login.defs: ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin ENV_PATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin c) within /etc/sudoers: # env_reset Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin" Then, when I start abc from a terminal, all is fine and I can use any of the 4 languages described above. However, if I put a script within /etc/init.d that starts abc during the boot process (i.e. before the GUI starts), using Java from abc is still fine, but using Go, NodeJS or Scala doesn't work anymore. So I guess that during the boot process, the script within /etc/init.d that starts abc is executed before the environment variables set within /etc/sudoers, /etc/environment and /etc/login.defs are loaded. My question is: how do I force the environment variables to be loaded before my script that starts abc is launched? Any help and advice on this topic would be truly appreciated. TIA. Cheers. Thanks again to Mark and Danila. Below is the current "abc" script file that I put within /etc/init.d: #!/bin/sh ### EDIT: ADD THIS VARS DEFINITIONS: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin" CLASSPATH=$CLASSPATH:/usr/share/java/scala-library.jar "ENV_SUPATH PATH"="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin" "ENV_PATH PATH"="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin" "Defaults secure_path"="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin" ##### EXPORT this VARS so they are accessible to children: export "PATH" "CLASSPATH" "ENV_SUPATH PATH" "ENV_PATH PATH" "Defaults secure_path" ### BEGIN INIT INFO # Provides: abc # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: abc initscript # Description: This initscript starts and stops abc ### END INIT INFO # Author: Fnux, fnux.fl at gmail dot com # Version: 1.2 # Note: (edit ABC_PATH if abc isn't installed in /opt/abc) NAME=abc ABC_PATH=/opt/abc START="-d" STOP="-k" VERSION="-v" SCRIPTNAME=/etc/init.d/$NAME STARTMESG="\nStarting abc in daemon mode." UPMESG="\n$NAME is running." DOWNMESG="\n$NAME is not running." STATUS=`pidof $NAME` # Exit if abc is not installed [ -x "$ABC_PATH/$NAME" ] || exit 0 case "$1" in start) echo $STARTMESG cd $ABC_PATH ./$NAME $START ;; stop) cd $ABC_PATH ./$NAME $STOP ;; status) if [ "$STATUS" > 0 ] ; then echo $UPMESG else echo $DOWNMESG fi ;; restart) cd $ABC_PATH ./$NAME $STOP echo $STARTMESG ./$NAME $START ;; version) cd $ABC_PATH ./$NAME $VERSION ;; *) echo "Usage: $SCRIPTNAME {start|status|restart|stop|version}" >&2 exit 3 ;; esac : So, where and how should I write the needed environment variables? a) Go needs the following statements (i.e. PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin" ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin ENV_PATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin # env_reset Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"), and b) Scala needs this one: (i.e. CLASSPATH=$CLASSPATH:/usr/share/java/scala-library.jar). TIA for an explanation of how to do so. Cheers.

    Read the article

  • How to allow users to monitor performance of a set of servers without touching every server?

    - by Jon Seigel
    I'm not a sysadmin, so this may be trivial. We have about 20 Windows Server 2008 R2 VMs we want to monitor centrally using Perfmon. The only issue is that the user account that's going to be doing the monitoring is not (and I assume will never be) in the Administrators group. The servers, and the user account (currently one, but could be more) are all on the same domain. Right now we're running a pilot with 5 of the servers, touching each VM manually to set the permissions, which is already getting cumbersome to manage. If we decide to roll this out to all the servers, we need a scalable solution to control access. What is the most flexible way to accomplish this? I'd like a solution that would work with 200 servers just as easily as the 20 servers we have now.

    Read the article

  • Windows Server 2003 with Apache and IIS causing random faulting and performance issues with Apache?

    - by contrebis
    I'm trying to fix a problem on a Windows Server 2003 SE install which is running IIS 6 and the Apache web server (with PHP and MySQL). IIS sites are bound to one IP, Apache to the other. Everything seemed fine until the other IP address was installed to allow a web service to run under IIS. Symptoms: Apache now responds very slowly, even to requests for static files (often 30 seconds or more), and sporadic errors are appearing in the event logs, like: Faulting application httpd.exe, version 2.2.14.0, faulting module php5ts.dll, version 5.2.13.13, fault address 0x000ac14f. I've double-checked the config files, taken account of this question/answer http://serverfault.com/questions/51230/running-iis-and-apache-on-the-same-windows-server, upped the Apache log level to debug, run TCPView to check for conflicting bindings, and upgraded to the latest Apache/PHP versions, but still no success or indication of a cause. Any suggestions on where to look, or debugging tips, would be gratefully received. I'm a web programmer, so I'm not that familiar with Windows Server admin or the details of the networking stack. Running PHP under IIS is not an option, and hosting on another server is non-ideal.

    Read the article

  • Wondering how much data bandwidth a 3G GSM cell tower can support?

    - by Karim
    I was always wondering how many simultaneous users a 3G tower can support at its advertised data rates. I mean, they advertise 28.8 Mb/sec for 3G data, but in reality if a lot of people use it, say 10, it won't give 288 Mb/sec of bandwidth. I didn't find anywhere where such information is published, so I thought to ask here. I don't know why the cell operators keep it such a secret :)

    Read the article
