Search Results

Search found 3931 results on 158 pages for 'gwt places'.

  • Why should I override hashCode() when I override equals() method?

    - by Bragaadeesh
    Ok, I have heard from many places and sources that whenever I override the equals() method, I need to override the hashCode() method as well. But consider the following piece of code:

        package test;

        public class MyCustomObject {
            int intVal1;
            int intVal2;

            public MyCustomObject(int val1, int val2) {
                intVal1 = val1;
                intVal2 = val2;
            }

            public boolean equals(Object obj) {
                return (((MyCustomObject) obj).intVal1 == this.intVal1) &&
                       (((MyCustomObject) obj).intVal2 == this.intVal2);
            }

            public static void main(String a[]) {
                MyCustomObject m1 = new MyCustomObject(3, 5);
                MyCustomObject m2 = new MyCustomObject(3, 5);
                MyCustomObject m3 = new MyCustomObject(4, 5);
                System.out.println(m1.equals(m2));
                System.out.println(m1.equals(m3));
            }
        }

    Here the output is true, false, exactly the way I want it to be, and I don't care about overriding the hashCode() method at all. This means that overriding hashCode() is an option rather than the mandatory requirement everyone says it is. I'd like a second confirmation.
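
    The contract matters as soon as a hash-based collection is involved, because HashSet and HashMap locate an element by its hash code before ever calling equals(). .NET has the identical contract between Equals and GetHashCode, so here is a minimal C# sketch of the failure mode (the Point type is invented for illustration); the same miss happens with Java's HashSet/HashMap:

        using System;
        using System.Collections.Generic;

        // Overrides Equals but deliberately not GetHashCode.
        class Point
        {
            public int X, Y;
            public Point(int x, int y) { X = x; Y = y; }

            public override bool Equals(object obj) =>
                obj is Point p && p.X == X && p.Y == Y;
        }

        class Demo
        {
            static void Main()
            {
                var set = new HashSet<Point> { new Point(3, 5) };

                // The equality test itself passes...
                Console.WriteLine(new Point(3, 5).Equals(new Point(3, 5))); // True

                // ...but the set hashes the probe object (reference-based default
                // hash) into a different bucket, so the lookup almost always misses.
                Console.WriteLine(set.Contains(new Point(3, 5)));           // False
            }
        }

    Equals()-only overriding looks fine until the objects are used as keys in a hash map or elements of a hash set; that is why the two methods are expected to be overridden together.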

  • iPhone OpenGL ES Texture2D Masking

    - by Robert Neagu
    What's the best choice when trying to mask a texture like ColorSplash or other apps like iSteam, etc.? I started learning OpenGL ES about 4 days ago (I'm a total rookie) and tried the following approach:

    1) I created a colored texture2D, a grayscale version of the first texture, and a third texture2D called mask.
    2) I also created a texture2D for the brush, which is grayscale and opaque (brush = black = 0,0,0,1 and surroundings = white = 1,1,1,1). My intention was to create an antialiased brush with smooth edges, but I'm fine with a normal one right now.
    3) I searched for masking techniques on the internet and found this tutorial about masking: ZeusCMD - Design and Development Tutorials : OpenGL ES Programming Tutorials - Masking. The tutorial tells me to use blending to achieve masking: first draw colored, then mask with glBlendFunc(GL_DST_COLOR, GL_ZERO), and then grayscale with glBlendFunc(GL_ONE, GL_ONE). This gives me something close to what I want, but not exactly: the result is masked, but it's somehow over-brightened.
    4) For drawing to the mask texture I used an extra frame buffer object (FBO).

    I'm not really happy with the resulting image (the over-brightened picture) nor with the speed achieved with this method. I think the normal way would be to draw directly to the grayscale (overlay) texture2D, affecting only its alpha channel in the places where the brush hits. Is there a fast way to achieve this? I have searched a lot and never got an answer that's clear and understandable. Then, in the main draw loop, I could draw only the colored texture and blend the grayscale on top with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I just want to learn to use OpenGL ES, and it's driving me nuts because I can't get it to work properly. Any advice or a link to a tutorial would be much appreciated.

  • How do I use Core Data with the Cocoa Text Input system?

    - by the Joel
    Hobbyist Cocoa programmer here. I have been looking around all the usual places, but this seems relatively under-explained: I am writing something a little out of the ordinary. It is much simpler than, but similar to, a desktop publishing app. I want editable text boxes on a canvas, arbitrarily placed. This is document-based, and I'd really like to use Core Data. Now, the Cocoa text-handling system seems to deal with a four-class structure: NSTextStorage, NSLayoutManager, NSTextContainer and finally NSTextView. I have looked into these and know how to use them, sort of. I have been making some prototypes and it works for simple apps. The problem arrives when I get into persistence. I don't know how, by way of Cocoa Bindings or something else, to store the contents of NSTextStorage (i.e. the actual text) in my managed object context. I have considered overriding method pairs like -words and -setWords: in these objects. This would let me link the words to a String, which I know how to store in Core Data. However, I'd have to override every method that affects the text, and that seems a little much. Thankful for any insights.

  • Can't get KnownType to work with WCF

    - by Kelly Cline
    I have an interface and a class defined in separate assemblies, like this:

        namespace DataInterfaces
        {
            public interface IPerson
            {
                string Name { get; set; }
            }
        }

        namespace DataObjects
        {
            [DataContract]
            [KnownType( typeof( IPerson ) )]
            public class Person : IPerson
            {
                [DataMember]
                public string Name { get; set; }
            }
        }

    This is my service interface:

        public interface ICalculator
        {
            [OperationContract]
            IPerson GetPerson();
        }

    When I update my service reference for my client, I get this in the Reference.cs:

        public object GetPerson() {
            return base.Channel.GetPerson();

    I was hoping that KnownType would give me IPerson instead of "object" here. I have also tried [KnownType( typeof( Person ) )] with the same result. I have control of both client and server, so I have my DataObjects (where Person is defined) and DataInterfaces (where IPerson is defined) assemblies in both places. Is there something obvious I am missing? I thought KnownType was the answer to being able to use interfaces with WCF.

    ----- FURTHER INFORMATION -----

    I removed the KnownType from the Person class and added [ServiceKnownType( typeof( Person ) )] to my service interface, as suggested by Richard. The client-side proxy still looks the same,

        public object GetPerson() {
            return base.Channel.GetPerson();

    but now it doesn't blow up. The client just gets an "object", though, so it has to cast it to IPerson before it is useful:

        var person = client.GetPerson();
        Console.WriteLine( ( ( IPerson ) person ).Name );
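
    For reference, a minimal sketch of the ServiceKnownType arrangement described in the update (an illustrative outline using the question's own names, not the thread's accepted answer). The generated proxy still types the return value as object, because an interface cannot itself be a data contract; it is the shared DataInterfaces/DataObjects assemblies that make the client-side cast work:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // DataInterfaces assembly, referenced by both client and server.
        public interface IPerson
        {
            string Name { get; set; }
        }

        // DataObjects assembly: the concrete type that actually travels on the wire.
        [DataContract]
        public class Person : IPerson
        {
            [DataMember]
            public string Name { get; set; }
        }

        // Service contract: ServiceKnownType tells the serializer which concrete
        // types may stand in for IPerson.
        [ServiceContract]
        [ServiceKnownType( typeof( Person ) )]
        public interface ICalculator
        {
            [OperationContract]
            IPerson GetPerson();
        }

    On the client, with "Reuse types in referenced assemblies" enabled in the service reference so the proxy reuses the shared Person/IPerson types, the returned object can simply be cast:

        var person = (IPerson) client.GetPerson();
        Console.WriteLine( person.Name );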

  • Axis2 SOAP Envelope Header Information

    - by BigZig
    I'm consuming a web service that places an authentication token in the SOAP envelope header. It appears (from looking at the samples that came with the web service's WSDL) that if the stub is generated in .NET, this header information is exposed through a member variable in the stub class. However, when I generate my Axis2 Java stub using WSDL2Java, it doesn't appear to be exposed anywhere. What is the correct way to extract this information from the SOAP envelope header?

    WSDL: http://www.vbar.com/zangelo/SecurityService.wsdl

    C# sample:

        using System;
        using SignInSample.Security; // web service
        using SignInSample.Document; // web service

        namespace SignInSample
        {
            class SignInSampleClass
            {
                [STAThread]
                static void Main(string[] args)
                {
                    // login to the Vault and set up the document service
                    SecurityService secSvc = new SecurityService();
                    secSvc.Url = "http://localhost/AutodeskDM/Services/SecurityService.asmx";
                    secSvc.SecurityHeaderValue = new SignInSample.Security.SecurityHeader();
                    secSvc.SignIn("Administrator", "", "Vault");

                    DocumentServiceWse docSvc = new DocumentServiceWse();
                    docSvc.Url = "http://localhost/AutodeskDM/Services/DocumentService.asmx";
                    docSvc.SecurityHeaderValue = new SignInSample.Document.SecurityHeader();
                    docSvc.SecurityHeaderValue.Ticket = secSvc.SecurityHeaderValue.Ticket;
                    docSvc.SecurityHeaderValue.UserId = secSvc.SecurityHeaderValue.UserId;
                }
            }
        }

    The sample illustrates what I'd like to do. Notice how the secSvc instance has a SecurityHeaderValue member variable that is populated after a successful secSvc.SignIn() invocation. Here's some relevant API documentation regarding the SignIn method: "Although there is no return value, a successful sign in will populate the SecurityHeaderValue of the security service. The SecurityHeaderValue information is then used for other web service calls."

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a Data Flow task, I can slip a Row Count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a count(*) aggregate, test whether that returns a number greater than 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired work flow:

    1. New files are placed in an 'Incoming' directory.
    2. Files are picked up using a file:inbound-channel-adapter.
    3. The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses the line into an intermediary (shared) representation.
    4. This parsed line is routed to multiple 'Stage 2' channels.
    5. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others.
    6. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4.

    *'N' above stands for any reasonable number of lines to read at a time, from [1, whatever I can fit into memory reasonably], but is guaranteed to always be less than the number of lines in the full file.

    How can I accomplish streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory. As a side note, I have a working implementation of this work flow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.

  • Refactoring or Rewriting Monolithic PHP Spaghetti Codebase

    - by nategood
    I've inherited a really poorly designed PHP spaghetti-code project. It's been gaining a good bit of traffic recently and is starting to have performance issues on top of the poor monolithic code base. It's maxing out a chunky 16GB dedicated machine when it really shouldn't be. I'm planning on doing some performance tweaks right off the bat to help the performance issue, but this still won't really help the horrible code base. The team is small but expecting to grow very soon. I've read Joel's article on the troubles of doing a complete rewrite and see the concerns. But how bad does the code base have to be before you consider a rewrite? There is PHP handling logic interjected into what one would usually consider a "view". Even worse, in some places SQL statements are in these same files! The only real separation of presentation and logic is a few PHP scripts that serve as function libraries. These scripts do most of the ORM stuff... if you can even call it that. Trying to slowly refactor this seems like a nightmare. Open to your thoughts and opinions... however, I'm not interested in hearing "Run away, run away!".

  • Recursive QuickSort suffering a StackOverflowException -- Need fresh eyes

    - by jon
    I am working on a recursive QuickSort method implementation in a GenericList class. I will have a second method that accepts a compareDelegate to compare different types, but for development purposes I'm sorting a GenericList<int>. I am receiving StackOverflowExceptions in different places depending on the list size. I've been staring at and tracing through this code for hours and probably just need a fresh pair of (more experienced) eyes. Definitely wanting to learn why it is broken, not just how to fix it.

        public void QuickSort()
        {
            int i, j, lowPos, highPos, pivot;
            GenericList<T> leftList = new GenericList<T>();
            GenericList<T> rightList = new GenericList<T>();
            GenericList<T> tempList = new GenericList<T>();

            lowPos = 1;
            highPos = this.Count;

            if (lowPos < highPos)
            {
                pivot = (lowPos + highPos) / 2;
                for (i = 1; i <= highPos; i++)
                {
                    if (this[i].CompareTo(this[pivot]) <= 0)
                        leftList.Add(this[i]);
                    else
                        rightList.Add(this[i]);
                }

                leftList.QuickSort();
                rightList.QuickSort();

                for (i = 1; i <= leftList.Count; i++)
                    tempList.Add(leftList[i]);
                for (i = 1; i <= rightList.Count; i++)
                    tempList.Add(rightList[i]);

                this.items = tempList.items;
                this.count = tempList.count;
            }
        }
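
    A likely culprit, offered as a hedged observation rather than an answer from the original thread: because the comparison this[i].CompareTo(this[pivot]) <= 0 keeps the pivot element itself in leftList, a pivot that happens to be the largest value sends every element into leftList, so the recursive call sees a list of the same size and never terminates; whether that happens depends on the data, which matches the size-dependent overflow. A minimal sketch of a partition that removes the pivot before recursing (written against a plain 0-based List<T> rather than the 1-based GenericList<T> above):

        using System;
        using System.Collections.Generic;

        static class QuickSortSketch
        {
            public static List<T> Sort<T>(List<T> items) where T : IComparable<T>
            {
                if (items.Count <= 1)
                    return new List<T>(items);

                int pivotIndex = items.Count / 2;
                T pivot = items[pivotIndex];

                var left = new List<T>();
                var right = new List<T>();
                for (int i = 0; i < items.Count; i++)
                {
                    if (i == pivotIndex)
                        continue;                        // the pivot joins neither partition
                    if (items[i].CompareTo(pivot) <= 0)
                        left.Add(items[i]);
                    else
                        right.Add(items[i]);
                }

                // Each recursive call sees strictly fewer elements, so it terminates.
                var result = Sort(left);
                result.Add(pivot);
                result.AddRange(Sort(right));
                return result;
            }
        }

    Calling QuickSortSketch.Sort(new List<int> { 5, 3, 8, 1 }) returns 1, 3, 5, 8; the same idea (take the pivot out of the partition, or switch to an in-place low/high partition) carries over to the 1-based GenericList<T> version.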

  • MS Access (Jet) transactions, workspaces & scope

    - by Eric G
    I am having trouble with committing a transaction (using Access 2003 DAO). It's acting as if I never had called BeginTrans: I get error 3034 on CommitTrans, "You tried to commit or rollback a transaction without first beginning a transaction", and the changes are written to the database (presumably because they were never wrapped in a transaction). However, BeginTrans is run, if you step through it. I am running it within the Access environment using the DBEngine(0) workspace. The tables I'm updating are all opened via a Jet database connection (to the same database) and updated using DAO.Recordset.Update. The connection is opened before BeginTrans is called. I'm not doing anything weird in the middle of the transaction like closing/opening connections or multiple workspaces, etc. There is one nested transaction level (basically it's wrapping multiple transacted updates in an outer transaction, so if any fail they all fail). The inner transactions run without errors; it's the outer transaction that doesn't work.

    Here are a few things I've looked into and ruled out:

    - The transaction is spread across several methods, and BeginTrans and CommitTrans (and Rollback) are all in different places. But when I tried a simple test of running a transaction this way, it doesn't seem like this should matter.
    - I thought maybe the database connection gets closed when it goes out of local scope, even though I have another 'global' reference to it (I'm never sure what DAO does with database connections, to be honest). But this seems not to be the case: right before the commit, the connection and its recordsets are alive (I can check their properties, EOF = False, etc.).
    - My CommitTrans and Rollback are done within event callbacks. (Very basically, a parser program is throwing an 'onLoad' or 'onLoadFail' event at the end of parsing, which I am handling by either committing or rolling back the inserts I made during processing.) However, again, trying a simple test, it doesn't seem like this should matter.

    Any ideas why this isn't working for me? Thanks.

  • Problem with linking in gcc

    - by chitra
    I am compiling a program in which a header file is defined in multiple places. The contents of each header file are different: though the variable names are the same, the internal members of the structures are different. Now, at link time, the linker picks up a library file which belongs to a different header, not the one used during compilation, and because of this I get an error at link time. Since there are so many libraries with the same name, I don't know which library is being picked up. I have lots of OEM and other customized libraries which are part of this build. I checked the options in gcc which talk about selecting different library files to be included, but nowhere am I able to see an option which reports which libraries are being picked up by the linker. If the linker is able to find more than one library with the same name, which one it picks up is something I am not able to understand. I don't want to specify any path; rather, I want to understand how the linker resolves the multiple libraries that it is able to locate. I tried the -v option, but that doesn't list the path from which gcc picks up the library. I am using gcc on Linux. Any help in this regard is highly appreciated.

    Regards,
    Chitra

  • How do I use Ninject with class libraries I am developing?

    - by Greg
    Hi, if I am working on a class library, how do I make use of Ninject here, both from the internal class library point of view and from the client code? For example:

    - Should the class library have its own IoC set up, or should it always assume the client code will supply it? If not (i.e. it's up to the client to have the IoC in place), then where is the mapping data stored? Is this mapping of the class library's functionality to be placed in the client?
    - Say I have a reusable library available that uses interfaces, with classes that use the getInstance concept to create concrete classes for you to use. In this case, would it make sense on the client side to use the IoC container to create instances of these classes, or is that really applying a double layer of abstraction?

    Q2: In the cases where I'm building the reusable library myself and want the client to use an IoC container, would I then dispense with the overhead of having factories or "getInstance" methods to instantiate the classes in the client (i.e., since the IoC container would do this, no)?
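
    For concreteness, a minimal sketch of the 'client owns the container' arrangement (Ninject 2-style API; the interface, class, and module names here are invented for illustration): the library exposes interfaces and, optionally, a NinjectModule carrying its default bindings, while the application that composes everything creates the kernel and can override those bindings.

        using Ninject;
        using Ninject.Modules;

        // --- inside the class library ---

        public interface IReportRenderer
        {
            string Render(string data);
        }

        public class PlainTextRenderer : IReportRenderer
        {
            public string Render(string data) => "Report: " + data;
        }

        // Optional: the library's suggested default bindings, packaged as a module
        // so the client can load it as-is or replace it wholesale.
        public class ReportingModule : NinjectModule
        {
            public override void Load()
            {
                Bind<IReportRenderer>().To<PlainTextRenderer>();
            }
        }

        // --- inside the client application (the composition root) ---

        public static class CompositionRoot
        {
            public static IReportRenderer BuildRenderer()
            {
                var kernel = new StandardKernel(new ReportingModule());
                return kernel.Get<IReportRenderer>();
            }
        }

    In this arrangement the mapping data lives either in the client's own modules or in modules the library ships purely as a convenience; either way, only the application holds the kernel, and the library itself never calls into the container.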

  • How good is the memory mapped Circular Buffer on Wikipedia?

    - by abroun
    I'm trying to implement a circular buffer in C, and have come across this example on Wikipedia. It looks as if it would provide a really nice interface for anyone reading from the buffer, as reads which wrap around from the end to the beginning of the buffer are handled automatically. So all reads are contiguous. However, I'm a bit unsure about using it straight away as I don't really have much experience with memory mapping or virtual memory and I'm not sure that I fully understand what it's doing. What I think I understand is that it's mapping a shared memory file the size of the buffer into memory twice. Then, whenever data is written into the buffer it appears in memory in 2 places at once. This allows all reads to be contiguous. What would be really great is if someone with more experience of POSIX memory mapping could have a quick look at the code and tell me if the underlying mechanism used is really that efficient. Am I right in thinking for example that the file in /dev/shm used for the shared memory always stays in RAM or could it get written to the hard drive (performance hit) at some point? Are there any gotchas I should be aware of? As it stands, I'm probably going to use a simpler method for my current project, but it'd be good to understand this to have it in my toolbox for the future. Thanks in advance for your time.

  • OOP, Interface Design and Encapsulation

    - by Mau
    C# project, but it could be applied to any OO language. Three interfaces interacting:

        public interface IPublicData {}

        public /* internal */ interface IInternalDataProducer
        {
            string GetData();
        }

        public interface IPublicWorker
        {
            IPublicData DoWork();
            IInternalDataProducer GetInternalProducer();
        }

        public class Engine
        {
            Engine(IPublicWorker worker) {}

            IPublicData Run()
            {
                DoSomethingWith(worker.GetInternalProducer().GetData());
                return worker.DoWork();
            }
        }

    Clearly Engine is parametric in the actual worker that does the job. A further source of parametrization is how we produce the 'internal data' via IInternalDataProducer. This implementation requires IInternalDataProducer to be public because it's part of the declaration of the public interface IPublicWorker. However, I'd like it to be internal since it's only used by the engine. A solution is to make IPublicWorker produce the internal data itself, but that's not very elegant since there are only a couple of ways of producing it (while there are many more worker implementations); therefore it's nice to delegate to a couple of separate concrete classes. Moreover, the IInternalDataProducer is used in more places inside the engine, so it's good for the engine to pass around the actual object. I'm looking for elegant ideas/patterns. Cheers :-)
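
    One direction worth sketching (an illustrative outline, not an answer from the original thread; the assembly and interface names are assumed): keep IInternalDataProducer internal, move the producer accessor onto a second internal interface, and let the engine probe for that interface at run time. InternalsVisibleTo lets a known worker assembly implement the internal types without exposing them publicly.

        using System;
        using System.Runtime.CompilerServices;

        // In the engine assembly: grant the (assumed) worker assembly access to internals.
        [assembly: InternalsVisibleTo("Acme.Workers")]

        public interface IPublicData { }

        public interface IPublicWorker
        {
            IPublicData DoWork();
        }

        // Stays internal: only the engine and friend assemblies ever see it.
        internal interface IInternalDataProducer
        {
            string GetData();
        }

        // The internal hook the engine probes for, instead of forcing the
        // producer onto the public IPublicWorker contract.
        internal interface IInternalProducerSource
        {
            IInternalDataProducer GetInternalProducer();
        }

        public class Engine
        {
            private readonly IPublicWorker worker;

            public Engine(IPublicWorker worker) { this.worker = worker; }

            public IPublicData Run()
            {
                if (worker is IInternalProducerSource source)
                {
                    // Stand-in for the DoSomethingWith(...) call in the question.
                    Console.WriteLine(source.GetInternalProducer().GetData());
                }
                return worker.DoWork();
            }
        }

    The trade-off is that the producer becomes an optional capability discovered by a cast rather than a guaranteed part of the worker contract.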

  • Passing a parameter in a Report's Open Event to a parameter query (Access 2007)

    - by JPM
    Hi there, I would like to know if there is a way to set the parameters in an Access 2007 query using VBA. I am new to using VBA in Access, and I have been tasked with adding a little piece of functionality to an existing app. The issue I am having is that the same report can be called in two different places in the application: the first is a command button on a data entry form, the other a switchboard button. The report itself is based on a parameter query that requires the user to enter a Supplier ID. The users would like not to have to enter the Supplier ID on the data entry form (since the form displays the Supplier ID already), but from the switchboard they would like to be prompted to enter a Supplier ID. Where I am stuck is how to call the report's query (in the report's Open event) and pass the SupplierID from the form as the parameter. I have been trying for a while, and I can't get anything to work correctly. Here is my code so far, but I am obviously stumped:

        Private Sub Report_Open(Cancel As Integer)
            Dim intSupplierCode As Integer

            'Check to see if the data entry form is open
            If CurrentProject.AllForms("frmExample").IsLoaded = True Then
                'Retrieve the SupplierID from the data entry form
                intSupplierCode = Forms![frmExample]![SupplierID]
                'Call the parameter query passing the SupplierID????
                DoCmd.OpenQuery "qryParams"
            Else
                'Execute the parameter query as normal
                DoCmd.OpenQuery "qryParams"?????
            End If
        End Sub

    I've tried Me.SupplierID = intSupplierCode, and although it compiles, it bombs when I run it. And here is my SQL code for the parameter query:

        PARAMETERS [Enter Supplier] Long;
        SELECT Suppliers.SupplierID, Suppliers.CompanyName, Suppliers.ContactName, Suppliers.ContactTitle
        FROM Suppliers
        WHERE (((Suppliers.SupplierID)=[Enter Supplier]));

    I know there are ways around this problem (and probably an easy way as well), but like I said, my lack of experience using Access and VBA makes things difficult. If any of you could help, that would be great!

  • Good way to allow people to select a lot of things?

    - by jphenow
    I'm using jQuery, ASP.NET, SQL Server, and the other usual suspects to design a company CRM. After they put in contact info, notes, dates, places and so forth, they have to be able to select many different people to be "CC'ed." A group of people will be required to be either "CC'ed" or "ToDo." The rest of the people can be nothing, "CC", or "ToDo." Currently we have it set up as a huge databind to templates with radio buttons for each option. Looks like shit. Anyone have any suggestions? I'd like to use a template with a datasource and have a good way to retrieve their answers and use them. I'm leaning in the jQuery direction, but like I said, I'll need there to be up to 3 possible options for the people. This is going to be all opinion, so I'm just looking for options. Just to re-clarify, this concept is similar to email, but I don't want them to have to type anything in, as it is a set group of names that they're allowed to select from. Looking for quick, simple, and pretty; somewhere in the range of 120 names.

  • How to use the `itemDoubleClicked(QTreeWidgetItem*,int)` signal in qtHaskell

    - by nano
    I want to use the itemDoubleClicked(QTreeWidgetItem*,int) signal in a Haskell program I'm writing, where I am using qtHaskell for the GUI. To connect a function, I have in other places done the following:

        dummyWidget <- myQWidget
        connectSlot object signal dummyWidget "customSlot()" $ f

    where object is some QWidget, signal is a string giving the signal, e.g. "triggered()", and f is the function I want to be called when the signal is sent. The definition of connectSlot in the API is:

        class Qcs x where
            connectSlot :: QObject a -> String -> QObject b -> String -> x -> IO ()

    where the instances of Qcs are:

        Qcs ()
        Qcs (QObject c -> String -> IO ())
        Qcs (QObject c -> Object d -> IO ())
        Qcs (QObject c -> Bool -> IO ())
        Qcs (QObject c -> Int -> IO ())
        Qcs (QObject c -> IO ())
        Qcs (QObject c -> OpenGLVersionFlag -> IO ())

    The first argument passed to f is supposed to be the QObject whose signal I'm using. As you can see, there is no instance where f, the function to connect to the signal, can take two further arguments to receive the QWidget and the integer sent by the signal. Is there a way to nevertheless connect that signal to a custom function?

  • UIImageView Centering Problem.

    - by sagar
    To understand my problem, let's follow these steps:

    1. Open Xcode - New Project - View based application.
    2. Open the ViewController.xib file.
    3. Drag an image-view into the viewController's view.
    4. Place that image-view in the center of the viewController's view.
    5. Apply a size of 125x125 to the image-view.
    6. Take any image with dimensions of 320 width by 460 height.
    7. Drag that image into your application's resources directory.
    8. Now, back in the .xib file, select the image-view.
    9. Set the image (which is in your resources) on the image-view.
    10. Set the image-view Mode to Center.

    OK, let me explain the situation now. After performing all the above steps, the image-view has an image which is larger than the image-view's size. However, the image-view places that image in the center, and in the view controller the image-view looks proper (not scaled). But when you run your application, the image-view will be displayed at a size of 320x460 (though it is not actually scaled). Now, I can't find the reason why Interface Builder is displaying one thing and the simulator/iPhone is displaying something else.

    My query: I want the image's center portion within the image-view, no matter what the image size is. Thanks in advance for sharing your knowledge. Sagar.

  • How do I implement configurations and settings?

    - by Malvolio
    I'm writing a system that is deployed in several places, and each site needs its own configurations and settings. A "configuration" is a named value that is necessary to a particular site (e.g., the database URL, S3 bucket name); every configuration is necessary, there is not usually a default, and it's typically string-valued. A "setting" is also a named value, but it just tweaks the behavior of the system; it's often numeric or Boolean, and there's usually some default. So far, I've been using property files or things like them, but it's a terrible solution. Several times, a developer has added a requirement for a configuration but not added the value to the file for the live configuration, so the new release passed all the tests, then failed when released to live. Better, of course, for every file to be compiled, so that a missing configuration, or one of the wrong type, won't get past the compiler, and to inject the site-specific class into the build for each site. As a bonus, a Scala file can easily model more complex values, especially lists, but also maps and tuples. The downside is that the files are sometimes maintained by people who aren't developers, so it has to be pretty self-explanatory, which was the advantage of property files. (Someone explain XML configurations to me: all the complexity of a compilable file but the run-time risk of a property file.) What I'm looking for is an easy pattern for defining a group of required names and allowable values. Any suggestions?
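
    The compiled-configuration idea described above is language-agnostic; here is a hedged illustration in C# (all names invented), where forgetting a required configuration in a site file is a compile error rather than a failure at release time, while settings keep overridable defaults:

        using System;

        // Configurations: required per site, no defaults, so the compiler
        // refuses any site class that forgets one.
        public abstract class SiteConfiguration
        {
            public abstract string DatabaseUrl { get; }
            public abstract string S3BucketName { get; }

            // Settings: behavioral tweaks with defaults; sites override only what they need.
            public virtual int RetryCount => 3;
            public virtual bool VerboseLogging => false;
        }

        // One small, fairly self-explanatory file per site.
        public sealed class LondonSite : SiteConfiguration
        {
            public override string DatabaseUrl => "server=db-lon;database=crm";
            public override string S3BucketName => "crm-exports-lon";
            public override bool VerboseLogging => true;   // site-specific tweak
        }

    The same shape carries over directly to a Scala trait: abstract vals for configurations, concrete vals for settings, and one small object per site.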

  • How to see external libraries' code when debugging

    - by Sanva
    Hello!! First of all... this is my message #1 in this place, so... please be nice with me ;) I recently started to study GNOME apps/libraries, and I found that debuggers are an excellent way to learn, because seeing the code running helps a lot in understanding the structure of the program. But I have a problem. For example, debugging gnome-panel I found a lot of calls to external functions (basically the GTK+ functions), and although expecting to see the code of every function an application like this calls would be crazy, there are a lot that would be very interesting to see in action. The problem is that the debugger doesn't have the code of those libraries loaded and can't show it to me; at most it shows the line number where the execution is. I'm using Nemiver, and when it tries to enter an external function it complains because it can't find a file that is supposed to be somewhere. For example, trying to enter gtk_window_set_default_icon_name it tries to load /build/buildd/gtk+2.0-2.16.1/gtk/gtkwindow.c, and calling XSetIOErrorHandler, ../../src/ErrHndlr.c. So now I think that I'm doing something wrong... Why is Nemiver looking for those source files in those places? My system does not even have the /build/buildd/ folders, and I don't know if I'm doing something wrong or I need to install something or what. Any suggestion? How do you debug this kind of application? Best regards and thanks a lot for your time; and forgive me if my English is bad.

  • Design Philosophy Question - When to create new functions

    - by Eclyps19
    This is a general design question, not relating to any language. I'm a bit torn between going for minimum code or optimum organization. I'll use my current project as an example. I have a bunch of tabs on a form that perform different functions. Let's say Tab 1 reads in a file with a specific layout, Tab 2 exports a file to a specific location, etc. The problem I'm running into now is that I need these tabs to do something slightly different based on the contents of a variable. If it contains a 1 I may need to use Layout A and perform some extra concatenation; if it contains a 2 I may need to use Layout B and do no concatenation but add two integer fields; etc. There could be 10+ codes that I will be looking at. Is it preferable to create an individual path for each code early on, or to attempt to create a single path that branches out only when absolutely required? Creating an individual path for each code would allow my code to be extremely easy to follow at a glance, which in turn will help me out later on down the road when debugging or making changes. The downside is that I will increase the amount of code written by calling some of the same functions in multiple places (for example, steps 3, 5, and 9 may be exactly the same for every single code). Creating a single path that branches out only when required will be a bit messier and more difficult to follow at a glance, but I would create less code by placing conditionals only at the steps that are unique. I realize that this may be a case-by-case decision, but in general, if you were handed a previously built program to work on, which would you prefer?
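
    A middle ground that often comes up for exactly this trade-off (sketched here with invented names, as an illustration rather than a recommendation from the original question): keep one shared path for the steps that never vary, and push only the per-code differences behind a small interface dispatched from a map keyed by the code.

        using System;
        using System.Collections.Generic;

        // Only the steps that genuinely differ per code live behind the interface.
        interface ILayoutHandler
        {
            string ParseLine(string rawLine);     // e.g. Layout A vs Layout B
            string PostProcess(string parsed);    // e.g. extra concatenation vs added fields
        }

        class LayoutAHandler : ILayoutHandler
        {
            public string ParseLine(string rawLine) => rawLine.Trim();
            public string PostProcess(string parsed) => parsed + "|extra";
        }

        class LayoutBHandler : ILayoutHandler
        {
            public string ParseLine(string rawLine) => rawLine.ToUpperInvariant();
            public string PostProcess(string parsed) => parsed;
        }

        static class ImportPipeline
        {
            static readonly Dictionary<int, ILayoutHandler> Handlers =
                new Dictionary<int, ILayoutHandler>
                {
                    { 1, new LayoutAHandler() },
                    { 2, new LayoutBHandler() },
                };

            public static string Process(int code, string rawLine)
            {
                // Shared steps (validation, logging, writing output, ...) are written
                // once here; only the varying steps are dispatched per code.
                ILayoutHandler handler = Handlers[code];
                string parsed = handler.ParseLine(rawLine);
                return handler.PostProcess(parsed);
            }
        }

    Each of the 10+ codes then contributes one small class instead of a whole duplicated path, and the common steps stay in a single place.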

  • Cannot open ports in iptables on CentOS 5?

    - by abszero
    I am trying to open up ports in CentOS's firewall and am having a terrible go at it. I have followed the "HowTo" here: http://wiki.centos.org/HowTos/Network/IPTables as well as a few other places on the Net, but I still can't get the bloody thing to work. Basically I wanted to get two things working: VNC and Apache over the internal network. The problem is that the firewall is blocking all attempts to connect to these services. Now, if I issue service iptables stop and then try to access the server via VNC or hit the webserver, everything works as expected. However, the moment I turn iptables back on, all of my access is blocked. Below is a truncated version of my iptables file as it appears in vi:

        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5801 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 6001 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT

    Really I would just be happy if I could get port 80 opened up for Apache, since I can do most stuff via PuTTY, but if I could figure out VNC as well that would be cool. As far as VNC goes, there is just a single-user desktop that I am trying to connect to via: [ipaddress]:1. Any help would be greatly appreciated!

  • How To Deal With Exceptions In Large Code Bases

    - by peter
    Hi all, I have a large C# code base. It seems quite buggy at times, and I was wondering if there is a quick way to improve finding and diagnosing issues that are occurring on client PCs. The most pressing issue is that exceptions occur in the software, are caught, and are even reported through to me. The problem is that by the time they are caught, the original cause of the exception is lost. I.e., if an exception was caught in a specific method, but that method calls 20 other methods, and those methods each call 20 other methods... you get the picture: a null reference exception is impossible to figure out, especially if it occurred on a client machine. I have currently found some places where I think errors are more likely to occur and wrapped these directly in their own try/catch blocks. Is that the only solution? I could be here a long time. I don't care that the exception will bring down the current process (it is just a thread anyway, not the main application), but I care that the exceptions come back and don't help with troubleshooting. Any ideas? I am well aware that I am probably asking a question which sounds silly and may not have a straightforward answer. All the same, some discussion would be good.
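
    A small sketch of the usual ways to keep the original context when exceptions are reported (illustrative only; the method names are made up): report Exception.ToString() rather than just Message, since it carries the full stack trace and the whole InnerException chain, and when rethrowing either use a bare throw; or wrap the original as the inner exception so nothing is discarded.

        using System;

        static class Example
        {
            static void LoadOrder(string orderId)
            {
                // Stand-in for a failure deep inside the many-levels-down call tree.
                throw new NullReferenceException("simulated failure in LoadOrder");
            }

            static void ProcessOrder(string orderId)
            {
                try
                {
                    LoadOrder(orderId);
                }
                catch (Exception ex)
                {
                    // Wrapping preserves the original exception and its stack trace
                    // via InnerException instead of losing them.
                    throw new InvalidOperationException(
                        "Failed while processing order " + orderId, ex);
                }
            }

            static void Main()
            {
                try
                {
                    ProcessOrder("A-42");
                }
                catch (Exception ex)
                {
                    // ToString() includes the message, every inner exception and the
                    // stack trace of where things actually went wrong; Message alone does not.
                    Console.Error.WriteLine(ex.ToString());
                }
            }
        }

    Building the client-side error report from ex.ToString() (and shipping the PDB files so the trace carries file and line numbers) usually recovers the original failure point without wrapping every suspect block in its own try/catch.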

  • Display two arrays in the same table

    - by Naeem Ahmed
        $row = $query->fetchAll(PDO::FETCH_ASSOC);
        $num_rows = count($row);

        for ($i = 0; $i < $num_rows; $i++) {
            $title = htmlspecialchars($row[$i]['title']);
            $author = htmlspecialchars($row[$i]['author']);
            $school = htmlspecialchars($row[$i]['school']);
            $solution = $row[$i]['solution'];
            $notes = $row[$i]['notes'];

            $ad = array($title, $price, $author, $school, $contact, $content, $date);
            $inlcude = array($solutions, $notes);
            $field = 0;

            echo "<table border='1'>";
            // foreach($inlcude as $in) This failled miserably
            foreach ($ad as $post) {
                if ($field < 3) //The first three values are placed in the first row
                {
                    echo "<td>$post</td>";
                }
                if ($field >= 3) {
                    echo "<tr><td>$post</td><td>$in</td></tr>";
                }
                $field++;
            }
            echo '</table>';
        }

    I have two arrays, and I would like to display them in different columns in my table. $ad displays perfectly fine, but I'm having trouble displaying the contents of $inlcude in the second column. I've tried putting another foreach loop in to iterate through the contents of the second array, but that really screws up my table by placing random values in different places on the table. Besides the foreach loop, I don't know of any other way to iterate through the array. Any suggestions would be appreciated. Thanks!

  • Trusted Folder/Drive Picker in the Browser

    - by kylepfritz
    I'd like to write a folder/drive picker that runs in the browser and allows a user to select files to upload to a web service. The primary usage would be selecting folders or a whole CD and uploading them to the web with their directory structure intact. I'm imagining something akin to Jumploader, but which automatically enumerates external drives and CDs. I remember a version of Facebook's picture uploader that could do this sort of enumeration and was Java-based, but it has since been replaced by a much slicker plugin-based architecture. Because the application needs to run at very high trust, I think I'm limited to old-school Java applets. Is there another alternative? I'm hesitant to start down the plugin route because of the necessity of writing one for both IE and Mozilla at a minimum. Are there good places to get started there? On the applet front, I built a clunky prototype to demonstrate that I can enumerate devices and list files. It runs fine in the applet viewer, but I don't think I have the security settings configured correctly for it to run in the browser at full trust. Currently I don't get any drives back when I run it in the browser.

    Applet prototype:

        public class Loader extends javax.swing.JApplet {
            ...
            private void EnumerateDrives(java.awt.event.ActionEvent evt) {
                File[] roots = File.listRoots();
                StringBuilder b = new StringBuilder();
                for (File root : roots) {
                    b.append(root.getAbsolutePath() + ", ");
                }
                jLabel.setText(b.toString());
            }
        }

    Embed HTML:

        <p>Loader:</p>
        <script src="http://www.java.com/js/deployJava.js" type="text/javascript"></script>
        <script>
            var attributes = {code:'org.exampl.Loader.Loader.class', archive:'Loader/dist/Loader.jar', width:600, height:400};
            var parameters = {};
            deployJava.runApplet(attributes, parameters, '1.6');
