Search Results

Search found 30117 results on 1205 pages for 'thread specific storage'.

Page 483/1205 | < Previous Page | 479 480 481 482 483 484 485 486 487 488 489 490  | Next Page >

  • OWA no longer accessing 1 backend exchange server

    - by Morchuboo
    We have an IIS server hosting OWA as the web front end to 3 backend Exchange servers. Yesterday we got a lot of event 9791 warnings: "Cleanup of the DeliveredTo table for database 'Second Storage Group\Mailbox Store EUROPE 2' was pre-empted because the database engine's version store was growing too large. 0 entries were purged." At this point the server was crawling. Our mail admin is currently away and not contactable, so we rebooted the server. Everything seems OK when reading mail from Outlook and Evolution-MAPI clients, but OWA and ActiveSync connections cannot get through. When logging into OWA, users whose mailboxes are not on this backend server are fine, but users whose mailboxes are on it can reach the OWA front end, yet once they submit their credentials the page returns a 503 Service Unavailable error. We have since restarted the affected Exchange server and the IIS server, and run iisreset /noforce, but the problem persists. Can anyone suggest what we should look at...

    Read the article

  • How full is too full for mechanical hard drives?

    - by Sunny Molini
    I have heard many claim that it doesn't matter how full a drive is until it starts cutting into temp and virtual memory space. This doesn't make sense to me, given how data is laid out and transferred on a hard drive. The inside of the platter presents less data per revolution than the outside does, by a significant factor. The inner 40% of the radius of a full-size hard drive is taken up by the spindle, so only the outer 60% is used for data storage, but that still means the innermost track of a hard drive presents data 60% slower than the outermost track. By my calculation, a hard drive that is only 10% full should perform about 2.25 times faster than one that is 90% full, assuming the flow isn't constrained by other factors. Am I wildly off base here? For all the drives I know, even the top speeds of the first 1% of the drive would be well within the bandwidth provided by a SATA 2 connection.
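    A rough back-of-the-envelope version of that estimate (my own sketch, not from the question, assuming constant areal density, constant RPM, data filled from the outermost track inward, and the 0.4 inner-radius figure given above):

        \[
        r(f) = R\sqrt{1 - f\,(1 - 0.4^2)} = R\sqrt{1 - 0.84\,f},
        \qquad
        \frac{r(0.1)}{r(0.9)} = \frac{\sqrt{0.916}}{\sqrt{0.244}} \approx 1.9
        \]

    Here r(f) is the innermost used track when the drive is a fraction f full, and sequential throughput scales with radius, so the worst-case transfer rate of a 10% full drive comes out roughly a factor of 2 above a 90% full one, in the same ballpark as the 2.25x figure; the exact number depends on how the comparison is framed (innermost used track vs. average throughput).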

    Read the article

  • What's Wrong With My VB.NET Code Of Windows Forms Application?

    - by Krishanu Dey
    I have two forms, frmPrint and frmEmail, and a dataset (MyDataSet) with some DataTables and DataTable adapters. In frmPrint I have the following Sub:

        Public Sub StartPrinting()
            Try
                adapterLettersInSchedules.Fill(ds.LettersInSchedules)
                adapterLetters.Fill(ds.Letters)
                adapterClients.Fill(ds.Clients)
                adapterPrintJobs.GetPrintJobsDueToday(ds.PrintJobs, False, DateTime.Today)
                For Each prow As MyDataSet.PrintJobsRow In ds.PrintJobs
                    Dim lisrow As MyDataSet.LettersInSchedulesRow = ds.LettersInSchedules.FindByID(prow.LetterInScheduleID)
                    If lisrow.Isemail = False Then
                        Dim clientrow As MyDataSet.ClientsRow = ds.Clients.FindByClientID(prow.ClientID)
                        Dim letterrow As MyDataSet.LettersRow = ds.Letters.FindByID(lisrow.LetterID)
                        'prow.
                        'lisrow.is
                        Label1.SuspendLayout()
                        Label1.Refresh()
                        Label1.Text = "Printing letter"
                        txt.Rtf = letterrow.LetterContents
                        txt.Rtf = txt.Rtf.Replace("<%Firstname%>", clientrow.FirstName)
                        txt.Rtf = txt.Rtf.Replace("<%Lastname%>", clientrow.LastName)
                        txt.Rtf = txt.Rtf.Replace("<%Title%>", clientrow.Title)
                        txt.Rtf = txt.Rtf.Replace("<%Street%>", clientrow.Street)
                        txt.Rtf = txt.Rtf.Replace("<%City%>", clientrow.City)
                        txt.Rtf = txt.Rtf.Replace("<%State%>", clientrow.State)
                        txt.Rtf = txt.Rtf.Replace("<%Zip%>", clientrow.Zip)
                        txt.Rtf = txt.Rtf.Replace("<%PhoneH%>", clientrow.PhoneH)
                        txt.Rtf = txt.Rtf.Replace("<%PhoneW%>", clientrow.PhoneW)
                        txt.Rtf = txt.Rtf.Replace("<%Date%>", DateTime.Today.ToShortDateString)
                        Try
                            PDoc.PrinterSettings = printDlg.PrinterSettings
                            PDoc.Print()
                            prow.Printed = True
                            adapterPrintJobs.Update(prow)
                        Catch ex As Exception
                        End Try
                    End If
                Next prow
                ds.PrintJobs.Clear()
            Catch ex As Exception
                MessageBox.Show(ex.Message, "Print", MessageBoxButtons.OK, MessageBoxIcon.Error)
            End Try
        End Sub

    And in frmEmail I have the following Sub:

        Public Sub SendEmails()
            Try
                adapterLettersInSchedules.Fill(ds.LettersInSchedules)
                adapterLetters.Fill(ds.Letters)
                adapterClients.Fill(ds.Clients)
                adapterEmailJobs.GetEmailJobsDueToday(ds.EmailJobs, False, Today)
                Dim ls_string As String
                For Each prow As MyDataSet.EmailJobsRow In ds.EmailJobs
                    Dim lisrow As MyDataSet.LettersInSchedulesRow = ds.LettersInSchedules.FindByID(prow.LetterInScheduleID)
                    If lisrow.Isemail = True Then
                        Dim clientrow As MyDataSet.ClientsRow = ds.Clients.FindByClientID(prow.ClientID)
                        Dim letterrow As MyDataSet.LettersRow = ds.Letters.FindByID(lisrow.LetterID)
                        txt.Rtf = letterrow.LetterContents
                        ls_string = RTF2HTML(txt.Rtf)
                        ls_string = Mid(ls_string, 1, Len(ls_string) - 176)
                        If ls_string = "" Then Throw New Exception("Rtf To HTML Conversion Failed")
                        Label1.SuspendLayout()
                        Label1.Refresh()
                        Label1.Text = "Sending Email"
                        If SendEmail(clientrow.Email, ls_string, letterrow.EmailSubject) Then
                            Try
                                prow.Emailed = True
                                adapterEmailJobs.Update(prow)
                            Catch ex As Exception
                            End Try
                        Else
                            prow.Emailed = False
                            adapterEmailJobs.Update(prow)
                        End If
                    End If
                Next prow
            Catch ex As Exception
                MessageBox.Show(ex.Message, "Email", MessageBoxButtons.OK, MessageBoxIcon.Error)
            End Try
        End Sub

    I'm running these two subs on two different threads:

        Public th As New Thread(New ThreadStart(AddressOf StartFirstPrint))
        Public th4 As New Thread(New ThreadStart(AddressOf sendFirstEmail))

    Here is the code of StartFirstPrint and sendFirstEmail:

        Public Sub StartFirstPrint()
            Do While thCont
                Try
                    Dim frm As New frmPrint()
                    'frm.MdiParent = Me
                    frm.StartPrinting()
                Catch ex As Exception
                End Try
            Loop
        End Sub

        Public Sub sendFirstEmail()
            Do While thCont
                Try
                    Dim frmSNDEmail As New frmEmail
                    frmSNDEmail.SendEmails()
                Catch ex As Exception
                End Try
            Loop
        End Sub

    thCont is a public Boolean variable that specifies when to stop those threads. Most of the time this works very well, but sometimes it gives errors like the one in the following image. I don't know why this is occurring. Please help me.

    Read the article

  • Issue about Exchange 07 SP2 Backup in SBS 08

    - by Bastien974
    Hi, I'm trying to back up my Exchange 07 SP2 with Windows Server Backup. Since it's supposed to make an Exchange-aware backup with SP2, I created a scheduled full backup of C: (where my First Storage Group is located). The backup is successful, but when I go into the Mailbox database's properties, I see that the last full backup was 2 months ago (at that time backups worked, but we had some issues afterwards). In Server Manager, Features, I checked that I have the Windows Server Backup features installed. What am I missing? Thank you!

    Read the article

  • Detecting type of webservice in HttpModule

    - by Marcus
    Hi, Is there any way to detect the type of a webservice inside an HttpModule? The reason for this is that I want to do some property injection into the webservice before the request is processed. I found this: http://social.msdn.microsoft.com/Forums/en/asmxandxml/thread/0e848eee-d353-4e67-b47f-89fddb600009 but that is one h..l of an ugly solution. Does anyone have a nicer solution?

    Read the article

  • Is there something like clock() that works better for parallel code?

    - by Jared P
    So I know that clock() measures clock cycles, and thus isn't very good for measuring time, and I know there are functions like omp_get_wtime() for getting the wall time, but it is frustrating for me that the wall time varies so much, and I was wondering if there is some way to measure distinct clock cycles (counting a cycle only once even if more than one thread executed during it). It has to be something relatively simple/native. Thanks

    Read the article

  • Several client waiting for the same event

    - by ff8mania
    I'm developing a communication API to be used by a lot of generic clients to communicate with a proprietary system. This proprietary system exposes an API, and I use particular classes to send messages to, and wait for messages from, this system: the system alerts me that a message is ready using an event named OnMessageArrived. My idea is to expose a simple SendSyncMessage(message) method that lets the user/client simply send a message; the method returns the response. The client:

        using (Communicator c = new Communicator())
        {
            response = c.SendSync(message);
        }

    The Communicator class is done this way:

        public class Communicator : IDisposable
        {
            // Proprietary system object
            ExternalSystem c;
            String currentRespone;
            Guid currentGUID;
            private readonly ManualResetEvent _manualResetEvent;
            private ManualResetEvent _manualResetEvent2;
            String systemName = "system";
            String ServerName = "server";

            public Communicator()
            {
                _manualResetEvent = new ManualResetEvent(false);
                // These methods are from the proprietary system API
                c = SystemInstance.CreateInstance();
                c.Connect(systemName, ServerName);
            }

            private void ConnectionStarter(object data)
            {
                c.OnMessageArrivedEvent += c_OnMessageArrivedEvent;
                _manualResetEvent.WaitOne();
                c.OnMessageArrivedEvent -= c_OnMessageArrivedEvent;
            }

            public String SendSync(String Message)
            {
                Thread _internalThread = new Thread(ConnectionStarter);
                _internalThread.Start(c);
                _manualResetEvent2 = new ManualResetEvent(false);
                String toRet;
                int messageID;
                currentGUID = Guid.NewGuid();
                c.SendMessage(Message, "Request", currentGUID.ToString());
                _manualResetEvent2.WaitOne();
                toRet = currentRespone;
                return toRet;
            }

            void c_OnMessageArrivedEvent(int Id, string root, string guid, int TimeOut, out int ReturnCode)
            {
                if (!guid.Equals(currentGUID.ToString()))
                {
                    _manualResetEvent2.Set();
                    ReturnCode = 0;
                    return;
                }
                object newMessage;
                c.FetchMessage(Id, 7, out newMessage);
                currentRespone = newMessage.ToString();
                ReturnCode = 0;
                _manualResetEvent2.Set();
            }
        }

    I'm really a noob at using wait handles, but my idea was to create an instance that sends the message and waits for an event. As soon as the event arrives, it checks whether the message is the one I expect (checking the unique GUID); otherwise it continues to wait for the next event. This is because there could be (and usually are) a lot of clients working concurrently, and I want them to work in parallel. As implemented at the moment, if I run client 1, client 2 and client 3, client 2 starts sending messages only once client 1 has finished, and client 3 once client 2 has finished: not what I'm trying to do. Can you help me fix my code and reach my goal? Thanks!

    Read the article

  • Is this a valid backup strategy for MongoDB?

    - by James Simpson
    I've got a single dedicated server with a MongoDB database of around 10GB. I need to do daily backups, but I can't have downtime on the database. Is it possible to use a replica set on a single disk (with 2 instances of mongod running on different ports), and simply take the secondary one offline and back up its data files to offsite storage such as S3 (journaling is turned on)? Or would using master/slave be better than a replica set? Is this viable, and if so, what potential problems could I have? If not, how do I conceptualize this to work?

    Read the article

  • C# getting the results from an asynchronous call

    - by Jim Beam
    I have an API that I'm working with and it has limited documentation. I have been told that some of the methods it executes are called asynchronously. How can I get the results of these asynchronous calls? Note that I am not doing anything special to call them; the API handles the asynchronous part. But I can't seem to get a "reply" back from these calls - I'm assuming that's because they run on another thread.

    Read the article

  • How to access a field's value in an object using reflection

    - by kentcdodds
    My Question: How to overcome an IllegalAccessException when accessing the value of an object's field using reflection. Expansion: I'm trying to learn about reflection to make some of my projects more generic. I'm running into an IllegalAccessException when calling field.get(object) to get the value of that field in that object. I can get the name and type just fine. If I change the declaration from private to public then this works fine, but in an effort to follow the "rules" of encapsulation I don't want to do this. Any help would be greatly appreciated! Thanks! My Code:

        package main;

        import java.lang.reflect.Field;

        public class Tester {

            public static void main(String args[]) throws Exception {
                new Tester().reflectionTest();
            }

            public void reflectionTest() throws Exception {
                Person person = new Person("John Doe", "555-123-4567", "Rover");
                Field[] fields = person.getClass().getDeclaredFields();
                for (Field field : fields) {
                    System.out.println("Field Name: " + field.getName());
                    System.out.println("Field Type: " + field.getType());
                    System.out.println("Field Value: " + field.get(person));
                    // The line above throws: Exception in thread "main" java.lang.IllegalAccessException:
                    // Class main.Tester can not access a member of class main.Tester$Person with modifiers "private final"
                }
            }

            public class Person {

                private final String name;
                private final String phoneNumber;
                private final String dogsName;

                public Person(String name, String phoneNumber, String dogsName) {
                    this.name = name;
                    this.phoneNumber = phoneNumber;
                    this.dogsName = dogsName;
                }
            }
        }

    The Output:

        run:
        Field Name: name
        Field Type: class java.lang.String
        Exception in thread "main" java.lang.IllegalAccessException: Class main.Tester can not access a member of class main.Tester$Person with modifiers "private final"
            at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:95)
            at java.lang.reflect.AccessibleObject.slowCheckMemberAccess(AccessibleObject.java:261)
            at java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:253)
            at java.lang.reflect.Field.doSecurityCheck(Field.java:983)
            at java.lang.reflect.Field.getFieldAccessor(Field.java:927)
            at java.lang.reflect.Field.get(Field.java:372)
            at main.Tester.reflectionTest(Tester.java:17)
            at main.Tester.main(Tester.java:8)
        Java Result: 1
        BUILD SUCCESSFUL (total time: 0 seconds)
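    For reference, a minimal variation of the loop above (my own sketch, not part of the original question) that asks the reflection API to suppress the access check; it works as long as no SecurityManager policy forbids it, though it does sidestep the encapsulation the question wants to preserve:

        for (Field field : fields) {
            // Suppress the private/final access check for this Field instance
            field.setAccessible(true);
            System.out.println("Field Value: " + field.get(person));
        }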

    Read the article

  • Is there a way to tell if a file is done copying?

    - by Mike Cooper
    The scenario is this: Machine A has files I want to copy to Machine C. Machine A can't access C directly, but can access Machine B, which can access Machine C. I am using scp to copy from Machine A to B, and then from B to C. Machine B has limited storage space, so as files come in, I need to copy them to C and delete them from B. The second copy is much faster, so bandwidth is no problem. I could do this by hand, but I am lazy. What I would like is to run a script on B or C that copies each file to C as it finishes arriving. The scp job is running from A. So what I need is a way to ask (preferably from a bash script) whether file X.avi is "done" copying. Each of these files is a different size, and I can't really predict size or time of completion. Edit: by the way, the file transfer times are roughly 1 hour from A to B and about 10 minutes from B to C, if the time scale matters at all.

    Read the article

  • SQL Server 2008 takes a lot of memory?

    - by Ahmed Said
    I am running a stress test on my database, which is hosted on SQL Server 2008 64-bit running on a 64-bit machine with 10 GB of RAM. I have 400 threads; each thread queries the database every second, and the query time is negligible according to SQL Profiler, but after 18 hours SQL Server is using 7.2 GB of RAM and 7.2 GB of virtual memory. Is this normal behavior? And how can I adjust SQL Server to clean up memory that is not in use?

    Read the article

  • ASP.NET Security Exception

    - by Neo1975
    I moved an ASP.NET application from an XP machine to a new server and now I get this exception:

        'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral,

    on this line of code:

        System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("it-IT");

    My server is configured as follows: Windows 2003 Server, SharePoint, MS Visual Studio 2008 Team System Workgroup Server. I tried changing the security settings and machine.config. Can someone help me and explain where and what to change, and how? Thanks a lot.

    Read the article

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository I kept on a server and checked out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing. Now, I have a bunch of git repositories for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory sticks/hard drives as backup. The problem is that it's a private repository, and git doesn't allow checking out just a specific folder (which I could push to github as a separate project while having the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories and don't really contain the actual code, so they're useless for backup). Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos), then do git push backupdrive1, git push mymemorystick, etc. So, the question: how do you handle your personal code and projects with git repositories, and keep them synced and backed up?

    Read the article

  • Making an SSD drive the primary boot/system drive

    - by David Ebbo
    [Not much of a hardware guy, so please excuse my ignorance :)] I just ordered an HP Pavilion Elite HPE-450t (desktop), which came with Win7 installed on the hard drive, using two partitions (C: and D:). Separately, I bought a 128GB SSD that I intend to use as my system drive. I got it in there and connected it, and right now it's the J: drive (which was the first letter available in Disk Manager). My goal is for the SSD to get a clean OS install and be the C: drive, and to clean out the other hard drive and make it D: (for misc data storage). Question #1: the motherboard has two SATA plugs. Does it matter which one I use for which drive? Question #2: what's the right way to install Win7 on the SSD so that it ends up being the C: drive? Do I need to switch some things around in the current Win7 that came with it, or can I do all that while installing Win7 on the SSD?

    Read the article

  • Multithreading consulting service

    - by Gustavo Paulillo
    Hello. I am creating a service that needs to perform the following tasks: consult bank services and persist data into a DB. The difficulty is that each process needs to execute in parallel. I mean, the better choice is to implement a multithreaded service, running each instance on its own thread. But how is it done? Thanks

    Read the article

  • Reliable 1tb or larger hard drive?

    - by jasondavis
    I am in the market for 2-3 new drives; I would like each to be at least 1 TB to 2 TB in size. I have been reading all the reviews on newegg.com for 1 TB and larger drives and they all have one thing in common: almost all the ones I read about have complaints of them being DOA or dying within a few weeks of use. I am hoping to find some drives in this storage range that have a reputation for lasting a long time instead of a short life. Please help me if you have any experience with this sort of drive. Most of the ones I read about were Western Digital brand. I realize some might complain that this question's answer would be based on a timeframe, so if a user searches and finds this answer a year from now it will be outdated, but I would appreciate any help based on the current hard drives available as of April 10th, 2010 on newegg.com.

    Read the article

  • Changing Apache 2.2.11 httpd.conf has no effect

    - by Adrian
    Hi, Hopefully someone can help here. I recently installed WampServer ver 2.0 with Apache ver 2.2.11. My issue is, I have some large PHP scripts which time out at the default 5 min (300 sec) browser limit (I'm using IE8). It is critical I get this limit extended. I have tried changing the httpd.conf file to include the following: TimeOut 1200. My objective was to set the timeout at 1200 seconds, or 20 min. I had just chosen a random location to place this directive within the httpd.conf file, as I cannot locate any documentation to suggest it belongs in a specific place within the file. Regardless, the changes I make appear in the httpd.conf file that can be found in the system tray for WampServer; however, they have no effect - the browser still times out after 5 minutes. I thought perhaps I had the capitals incorrect, so I changed it to: Timeout 1200. This change had no effect either. Can someone please help, this is very frustrating. Maybe the directive can only be used within a specific module? If so, I have no idea which one, nor do I know the syntax to specify this. Regards, Adrian.

    Read the article

  • Azure scalability over XML File

    - by dayscott
    What is the best-practice solution for programmatically changing the XML file where the number of instances is defined? I know that this is somehow possible with csmanage.exe for the Windows Azure API. How can I measure which Worker Role VMs are actually working? I asked this question on the MSDN Community forums as well: http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/02ae7321-11df-45a7-95d1-bfea402c5db1

    Read the article

  • find contiguous stretches of equal data in a vector

    - by mariotomo
    I have a numeric vector that contains patches of repeating elements, something like:

        R> data <- c(1,1,1,2,2,2,3,3,2,2,2,2,2,3,3,1,1,1,1,1)
        R> data
         [1] 1 1 1 2 2 2 3 3 2 2 2 2 2 3 3 1 1 1 1 1
        R>

    I need to extract contiguous patches of elements equal to a specific value... but I'm only interested in the patch around a specific position. So, my input is: (1) the numeric vector, (2) the desired value, (3) the position. I want to return a logical vector indicating which positions satisfy the request. If at that position the data does not equal the value, I return all FALSE. Possible outcomes that are not all F would be:

        data: 1 1 1 2 2 2 3 3 2 2 2 2 2 3 3 1 1 1 1 1

        [1]   T T T F F F F F F F F F F F F F F F F F
        [2]   F F F T T T F F F F F F F F F F F F F F
        [3]   F F F F F F T T F F F F F F F F F F F F
        [4]   F F F F F F F F T T T T T F F F F F F F
        [5]   F F F F F F F F F F F F F T T F F F F F
        [6]   F F F F F F F F F F F F F F F T T T T T
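    A language-agnostic sketch of one way to do it, scanning out from the given position (shown in Java purely for illustration; the method name and signature are my own, not from the question):

        // Mark the contiguous run of `value` containing `position` (0-based);
        // returns an all-false mask if data[position] != value.
        static boolean[] patchAround(int[] data, int value, int position) {
            boolean[] mask = new boolean[data.length];
            if (data[position] != value) return mask;
            int lo = position, hi = position;
            while (lo > 0 && data[lo - 1] == value) lo--;                 // extend left
            while (hi < data.length - 1 && data[hi + 1] == value) hi++;  // extend right
            for (int i = lo; i <= hi; i++) mask[i] = true;
            return mask;
        }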

    Read the article

  • Jetty: Stopping programmatically causes "1 threads could not be stopped"

    - by Ondra Žižka
    Hi, I have an embedded Jetty 6.1.26 instance. I want to shut it down via an HTTP GET sent to /shutdown, so I created a JettyShutdownServlet:

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setStatus(202, "Shutting down.");
            resp.setContentType("text/plain");
            ServletOutputStream os = resp.getOutputStream();
            os.println("Shutting down.");
            os.close();
            resp.flushBuffer();

            // Stop the server.
            try {
                log.info("Shutting down the server...");
                server.stop();
            } catch (Exception ex) {
                log.error("Error when stopping Jetty server: " + ex.getMessage(), ex);
            }
        }

    However, when I send the request, Jetty does not stop - a thread keeps hanging in org.mortbay.thread.QueuedThreadPool on the line with this.wait():

        // We are idle
        // wait for a dispatched job
        synchronized (this)
        {
            if (_job == null)
                this.wait(getMaxIdleTimeMs());
            job = _job;
            _job = null;
        }
        ...

    The log:

        2011-01-10 20:14:20,375 INFO org.mortbay.log jetty-6.1.26
        2011-01-10 20:14:34,756 INFO org.mortbay.log Started [email protected]:17283
        2011-01-10 20:25:40,006 INFO org.jboss.qa.mavenhoe.MavenHoeApp Shutting down the server...
        2011-01-10 20:25:40,006 INFO org.mortbay.log Graceful shutdown [email protected]:17283
        2011-01-10 20:25:40,006 INFO org.mortbay.log Graceful shutdown org.mortbay.jetty.servlet.Context@1672bbb{/,null}
        2011-01-10 20:25:40,006 INFO org.mortbay.log Graceful shutdown org.mortbay.jetty.webapp.WebAppContext@18d30fb{/jsp,file:/home/ondra/work/Mavenhoe/trunk/target/classes/org/jboss/qa/mavenhoe/web/jsp}
        2011-01-10 20:25:43,007 INFO org.mortbay.log Stopped [email protected]:17283
        2011-01-10 20:25:43,009 WARN org.mortbay.log 1 threads could not be stopped
        2011-01-10 20:26:43,010 INFO org.mortbay.log Shutdown hook executing
        2011-01-10 20:26:43,011 INFO org.mortbay.log Shutdown hook complete

    It blocks for exactly one minute, then shuts down. I've added the graceful shutdown, which should allow me to shut the server down from a servlet; however, it does not work, as you can see from the log. I've solved it this way:

        Server server = new Server(PORT);
        server.setGracefulShutdown(3000);
        server.setStopAtShutdown(true);
        ...
        server.start();

        if (server.getThreadPool() instanceof QueuedThreadPool) {
            ((QueuedThreadPool) server.getThreadPool()).setMaxIdleTimeMs(2000);
        }

    setMaxIdleTimeMs() needs to be called after start(), because the thread pool is created in start(). However, the threads are already created and waiting, so it only applies after all threads have been used at least once. I don't know what else to do except some awfulness like interrupting all threads or System.exit(). Any ideas? Is there a good way? Thanks, Ondra

    Read the article

  • Optimal Instance Size for EC2 Sharepoint Server

    - by Rob Wilkerson
    I'm surprised that I can't find any info about this, but I'm not a Windows admin and just a novice EC2 user. I have a client who wants to stand up a SharePoint server on EC2 for internal use. The team is small (10-20 folks) and traffic will be light. Mostly, the client is looking for one place to store documents (and revisions of documents) while making access easy for authenticated users anywhere in the world. They've settled on SharePoint and have other EC2 instances, so that seems like the natural fit, but I'm trying to figure out what to recommend for them. I'm currently thinking about a Medium instance. I'm afraid to go smaller because I think Windows would need a fair amount of memory just to run, but I'm very open to suggestions. Any advice would be much appreciated. I expect that the storage itself would happen on an EBS mount, but again, suggestions welcome. Thanks for your input.

    Read the article

  • Determine target architecture of binary file in Linux (library or executable)

    - by Fernando Miguélez
    We have an issue related to a Java application running under a (rather old) FC3 on an Advantech POS board with a VIA C3 processor. The Java application has several compiled shared libs that are accessed via JNI. The VIA C3 processor is supposed to be i686-compatible. Some time ago, after installing Ubuntu 6.10 on a Mini-ITX board with the same processor, I found out that the previous statement is not 100% true. The Ubuntu kernel hung on startup due to the C3 processor lacking some specific and optional instructions of the i686 set. These instructions, missing from the C3 implementation of i686, are used by default by the GCC compiler when i686 optimizations are enabled. The solution in that case was to go with an i386-compiled version of the Ubuntu distribution. The base problem with the Java application is that the FC3 distribution was installed on the HD by cloning from an image of the HD of another PC, in this case an Intel P4. Afterwards the distribution needed some hacking to get it running, such as replacing some packages (such as the kernel) with the i386-compiled versions. The problem is that after working for a while the system completely hangs without a trace. I am afraid that some i686 code is left somewhere in the system and could be executed randomly at any time (for example after recovering from suspend mode or something like that). My question is: is there any tool or way to find out what specific architecture a binary file (executable or library) is aimed at, given that "file" does not give that much information?

    Read the article

  • Design Philosophy Question - When to create new functions

    - by Eclyps19
    This is a general design question not relating to any language. I'm a bit torn between going for minimum code and optimum organization. I'll use my current project as an example. I have a bunch of tabs on a form that perform different functions. Let's say Tab 1 reads in a file with a specific layout, Tab 2 exports a file to a specific location, etc. The problem I'm running into now is that I need these tabs to do something slightly different based on the contents of a variable. If it contains a 1 I may need to use Layout A and perform some extra concatenation; if it contains a 2 I may need to use Layout B, do no concatenation, but add two integer fields; etc. There could be 10+ codes that I will be looking at. Is it preferable to create an individual path for each code early on, or to attempt to create a single path that branches out only when absolutely required? Creating an individual path for each code would make my code extremely easy to follow at a glance, which in turn will help me out later on down the road when debugging or making changes. The downside to this is that I will increase the amount of code written by calling some of the same functions in multiple places (for example, steps 3, 5, and 9 for every single code may be exactly the same). Creating a single path that branches out only when required will be a bit messier and more difficult to follow at a glance, but I would create less code by placing conditionals only at steps that are unique. I realize that this may be a case-by-case decision, but in general, if you were handed a previously built program to work on, which would you prefer?

    Read the article

  • Statsd, Graphite and graphs

    - by w00t
    I've set up Graphite and statsd, and both are running well. I'm using the example-client.py from graphite/examples to measure load values and it's OK. I started doing tests with statsd, and at first it seemed OK because it generated some graphs, but now it doesn't look quite right. First, this is my storage-schema.conf:

        pattern = .*
        retentions = 10:2160,60:10080,600:262974

    I'm using this command to send data to statsd:

        echo 'ssh.invalid_users:1|c' | nc -w 1 -u localhost 8126

    It executes; I click Update Graph in the Graphite web interface and it generates a line, but when I hit Update again the line disappears. If I execute the previous command 5 times, the graph line reaches 2 and it actually gets saved. Running the same command two more times, the graph line reaches 2 and then disappears. I can't find what I have misconfigured. The intended use is this:

        tail -n 0 -f /var/log/auth.log | grep --line-buffered "Invalid user" | while read line; do echo "ssh.invalid_users:1|c" | nc -w 1 -u localhost 8126; done
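    For reference, assuming the old secondsPerPoint:points syntax of storage-schemas.conf, that retention line keeps 10-second samples for 2160 points (10 s x 2160 = 6 hours), then 60-second samples for 10080 points (7 days), then 600-second samples for 262974 points (roughly 5 years).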

    Read the article
