Search Results

Search found 21481 results on 860 pages for 'oracle business intellige'.

  • drupal (CMS) or codeigniter (MVC) for creating a new web application?

    - by ajsie
    im going to create a new web application that is very customized. it will contain images, that are fully searchable - in a very, very customized way. when you click on the pictures you can add comments and so on. it requires users to be registered, but the registration/login process will be highly customized too. at the moment im using CodeIgniter for this. But i've read a lot of posts about CMS like Drupal and it sounds like i could let it handle basic stuff, maybe design and other front end work. i have no experience with CMS, in fact, i just started to use a MVC framework like CI and was impressed of how much easier it gets to start developing. so i wonder, if i'm going to create this kind of application, could i use drupal and then add the usual stuff, as i was going to do with CodeIgniter, like controllers, views, models, config files, my own libraries and so on? how does it work on a system like Drupal. how do you code PHP with it as with any MVC framework. it sounds like it has a lot of modules, i just wonder, if i can use it as a MVC framework but have the benefit of having all these basic stuff and design ready to use? cause then it sounds like the best "library" to provide for a web application from scratch. or is it difficult to create a customized app with it? i guess it has modules like images and users, but then how could i customize these so that every image has tags on it and country information, or have every user subscribing to changes to an image, that email will be sent to users and so on? cause i guess its easy to install a module. the question is, how do i customize it. maybe i don't need all that table columns. maybe i want to add/remove business logic. what are the pros and cons with using Drupal for this? is it even the right way to go? can you make a Stackoverflow with Drupal? Facebook? Twitter? Youtube? assuming that you know php of course. share your thoughts cause im totally new on creating a web application! thanks

  • NHibernate unable to create SessionFactory

    - by Tyler
    I'm having a bit of trouble setting up NHibernate, and I'm not too sure what the problem is exactly. I'm attempting to save a domain object to the database (Oracle 10g XE). However, I'm getting a TypeInitializationException while trying to create the ISessionFactory. Here is what my hibernate.cfg.xml looks like:

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2" >
          <session-factory name="MyProject.DataAccess">
            <property name="connection.driver_class">NHibernate.Driver.OracleClientDriver</property>
            <property name="connection.connection_string">
              User ID=myid;Password=mypassword;Data Source=localhost
            </property>
            <property name="show_sql">true</property>
            <property name="dialect">NHibernate.Dialect.OracleDialect</property>
            <property name="proxyfactory.factory_class">NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu</property>
            <mapping resource="MyProject/Domain/User.hbm.xml"/>
          </session-factory>
        </hibernate-configuration>

    I created a DAO which I will use to persist domain objects to the database. The DAO uses a HibernateUtil class that creates the SessionFactory. Both classes are in the DataAccess namespace along with the Hibernate configuration. This is where the exception is occurring. Here's that class:

        public class HibernateUtil
        {
            private static ISessionFactory SessionFactory = BuildSessionFactory();

            private static ISessionFactory BuildSessionFactory()
            {
                try
                {
                    // This seems to be where the problem occurs
                    return new Configuration().Configure().BuildSessionFactory();
                }
                catch (TypeInitializationException ex)
                {
                    Console.WriteLine("Initial SessionFactory creation failed." + ex);
                    throw new Exception("Unable to create SessionFactory.");
                }
            }

            public static ISessionFactory GetSessionFactory()
            {
                return SessionFactory;
            }
        }

    The DataAccess namespace references the NHibernate DLLs. This is virtually the same setup I've used with Hibernate in Java, so I'm not entirely sure what I'm doing wrong here. Any ideas?

    Edit: The innermost exception is the following:

        "Could not find file 'C:\Users\Tyler\Documents\Visual Studio 2010\Projects\MyProject\MyProject\ConsoleApplication\bin\Debug\hibernate.cfg.xml'."

    ConsoleApplication contains the entry point where I've created a User object and am trying to persist it with my DAO. Why is it looking for the configuration file there? The actual persisting takes place in the DAO, which is in DataAccess. Also, when I add the configuration file to ConsoleApplication, it still does not find it.

  • When to delete newly deprecated code?

    - by John
    I spent a month writing an elaborate payment system that handles both credit card payments and electronic fund transfers. My work was used on production server for about a month. I was told recently by the client that he no longer wants to use the electronic fund transfer feature. Because the way I had to interface and communicate with the credit card gateway is drastically different from the electronic fund transfer api (eg. the cc company gives transaction responses immediately after an http request, while the eft company gives transaction responses 5 business days after an http request), I spent a lot of time writing my own API to abstract common function calls like function payment(amount, pay_method,pay_freq) function updateRecurringSchedule(user_id,new_schedule) etc.. Now that the client wants to abandon the EFT feature, all my work for this abstracted payments API is obsolete. I'm deliberating over whether I should scrap my work. Here's my pro vs. con for scrapping it now: PRO 1: Eliminate code bloat PRO 2: New developers do not need to learn MY API. They only need to read the CC company's API PRO 3: Because the EFT company did not handle recurring payment schedules, refunds, and validation, I wrote my own application to do it. Although the CC company's API permitted this functionality, I opted to use mine instead so that I could streamline my code. now that EFT is out of the picture, I can delete all this confusing code and just rely on the CC company's sytsem to manage recurring billing, payment schedules, refunds, validations etc... CON 1: Although I can just delete the EFT code, it still takes time to remove the entire framework consolidates different payment systems. CON 2: with regards to PRO 3, it takes time to build functionality that integrates the payment system more closely with the CC company. CON 3: I feel insecure deleting all this work. I don't think I'll ever use it again. But, for some inexplicable reason, I just don't feel comfortable deleting this work "right now". So my question is, should I delete one month's worth recent development? If yes, should I do it immediately or wait X amount of time before doing so?

  • Difference between Cloud and Virtualization

    - by Akash Kava
    Note: This does not belong on ServerFault because it focuses on programming architecture. I have the following questions regarding the differences between cloud and virtualization.

    1. How is cloud different from virtualization? I recently tried to find out the pricing of Rackspace, Amazon and all similar cloud providers, and I found that our current 6 dedicated servers come out cheaper than their pricing. So how can one claim that cloud is cheaper? Is it cheaper only in comparison to normal hosting?

    2. We reorganized our infrastructure into a virtual environment to reduce our configuration overhead at the time of failure; we did not have to rewrite any piece of code that was already written for the earlier setup. So moving to virtualization does not require any reprogramming. But cloud is absolutely different and it will require entire reprogramming, right?

    3. Is it really worth it to recode when our current IT costs are 3-4 times lower than cloud hosting, including RAID backups and all sorts of clustering for high availability?

    4. A new programming architecture means new overheads of training staff, new methods of testing and new deployment schemes; does the cloud's promise of "on demand resource usage" justify all that?

    5. Our current development architecture is simple server-side ASP.NET web services with no local context, and Flex/Silverlight on the client side, which offers a pretty good REST architecture and is highly scalable. How does cloud differ from the REST model of deployment?

    6. On storage, SQL Server or MySQL offer pretty good replication and high availability, so what is the advantage in the cloud?

    7. Data guarantee: one of our vendors, hosting some other customer's app on the cloud (one of the most used), lost the entire (virtual) hard disk and an entire module in the first 6 months. A second provider said it's your duty to take backups; fine, I agree, but no provider gives an SLA for a data guarantee, they give 99% uptime. However, in most business apps uptime is less important than data integrity. In our 10 years of dedicated hosting experience we had only one hard disk crash. This makes me a little skeptical about going to the cloud and losing control over data, and I feel it's just a big marketing buzz to sell virtualization in a different form.

    8. Size of data: currently all providers charge very heavily for large data. If you are hosting only below 100GB, cloud can be a good alternative, but I think virtual servers and dedicated servers from 100GB up to a few TBs are still cheaper. Why would one want to pay so much for cloud when there is no data guarantee and nothing is said about redundancy?

    (I wish SO had something for spell check for Internet Explorer, sorry for wrong spellings in my post.)

  • No unique bean of type [javax.persistence.EntityManager] is defined

    - by sebajb
    I am using JUnit 4 to test DAO access with Spring (annotations) and JPA (Hibernate). The datasource is configured through JNDI (WebLogic) with an Oracle backend. The persistence unit is configured with just the name and a RESOURCE_LOCAL transaction-type. The application context file contains the entries for annotation config, JPA config, transactions, and the default package and configuration for annotation detection.

    Application context:

        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="persistenceUnitName" value="workRequest"/>
          <property name="dataSource" ref="dataSource" />
          <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
              <property name="databasePlatform" value="${database.target}"/>
              <property name="showSql" value="${database.showSql}" />
              <property name="generateDdl" value="${database.generateDdl}" />
            </bean>
          </property>
        </bean>

        <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
          <property name="jndiName">
            <value>workRequest</value>
          </property>
          <property name="jndiEnvironment">
            <props>
              <prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop>
              <prop key="java.naming.provider.url">t3://localhost:7001</prop>
            </props>
          </property>
        </bean>

        <bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
          <property name="entityManagerFactory" ref="entityManagerFactory" />
        </bean>

        <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>
        <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />

    I am using JUnit 4 like so (the JUnit test case):

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = { "classpath:applicationContext.xml" })
        public class AssignmentDaoTest {

            private AssignmentDao assignmentDao;

            @Test
            public void readAll() {
                assertNotNull("assignmentDao cannot be null", assignmentDao);
                List assignments = assignmentDao.findAll();
                assertNotNull("There are no assignments yet", assignments);
            }
        }

    Regardless of what changes I make, I get:

        No unique bean of type [javax.persistence.EntityManager] is defined

    Any hint on what this could be? I am running the tests inside Eclipse.
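    For context, with PersistenceAnnotationBeanPostProcessor registered, a Spring-managed DAO typically receives its EntityManager through @PersistenceContext field injection rather than by looking up an EntityManager bean. A minimal sketch of such a DAO follows; the class, entity, and query names are assumptions, not taken from the question:

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import org.springframework.stereotype.Repository;

        // Sketch only: assumes AssignmentDao/Assignment exist and that this bean is
        // picked up by component scanning, so the post-processor can inject an
        // EntityManager created from the entityManagerFactory bean above.
        @Repository
        public class JpaAssignmentDao implements AssignmentDao {

            @PersistenceContext
            private EntityManager entityManager;

            public List findAll() {
                // JPQL query over the assumed Assignment entity
                return entityManager.createQuery("select a from Assignment a").getResultList();
            }
        }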

  • gcc optimization? bug? and its practical implications for a project

    - by kumar_m_kiran
    Hi All, my questions are divided into three parts.

    Question 1

    Consider the below code:

        #include <iostream>
        using namespace std;

        int main( int argc, char *argv[])
        {
            const int v = 50;
            int i = 0X7FFFFFFF;

            cout<<(i + v)<<endl;

            if ( i + v < i )
            {
                cout<<"Number is negative"<<endl;
            }
            else
            {
                cout<<"Number is positive"<<endl;
            }

            return 0;
        }

    No specific compiler optimisation options are used, nor any -O flags; the basic compilation command g++ -o test main.cpp is used to form the executable. This seemingly very simple code has odd behaviour on SUSE 64-bit OS, gcc version 4.1.2. The expected output is "Number is negative"; instead, only on SUSE 64-bit, the output is "Number is positive". After some amount of analysis and doing a 'disass' of the code, I find that the compiler optimises in the below way: since i is the same on both sides of the comparison and cannot be changed in the same expression, it removes i from the equation. Now the comparison leads to if ( v < 0 ), where v is a positive constant, so during compilation itself the address of the else-branch cout call is loaded into the register. No cmp/jmp instructions can be found. I see this behaviour only in gcc 4.1.2 on SUSE 10. When tried on AIX 5.1/5.3 and HP IA64, the result is as expected. Is the above optimisation valid? Or is using the overflow mechanism for int not a valid use case?

    Question 2

    Now when I change the conditional statement from if (i + v < i) to if ( (i + v) < i ), the behaviour is still the same. With this, at least, I would personally disagree: since additional braces are provided, I expect the compiler to create a temporary built-in-type variable and then compare, thus nullifying the optimisation.

    Question 3

    Suppose I have a huge code base and I migrate my compiler version; such a bug/optimisation can cause havoc in my system's behaviour. Of course, from a business perspective it is very ineffective to test all lines of code again just because of a compiler upgrade. I think for all practical purposes these kinds of errors are very difficult to catch (during an upgrade) and will invariably leak to the production site. Can anyone suggest any possible way to ensure that these kinds of bugs/optimisations do not have any impact on my existing system/code base?

    PS: When the const for v is removed from the code, the optimisation is not done by the compiler. I believe it is perfectly fine to use the overflow mechanism to find if the variable is past the MAX - 50 value (in my case).

  • What are the pros and cons of using manual list iteration vs recursion through fail

    - by magus
    I come up against this all the time, and I'm never sure which way to attack it. Below are two methods for processing some season facts. What I'm trying to work out is whether to use method 1 or 2, and what the pros and cons of each are, especially for large amounts of facts.

    methodone seems wasteful: since the facts are available, why bother building a list of them (especially a large list)? This must have memory implications too if the list is large enough? And it doesn't take advantage of Prolog's natural backtracking feature.

    methodtwo takes advantage of backtracking to do the recursion for me, and I would guess it would be much more memory efficient, but is it good programming practice generally to do this? It's arguably uglier to follow, and might there be any other side effects? One problem I can see is that each time fail is called, we lose the ability to pass anything back to the calling predicate, e.g. if it was methodtwo(SeasonResults), since we continually fail the predicate on purpose. So methodtwo would need to assert facts to store state.

    Presumably(?) method 2 would be faster, as it has no (large) list processing to do? I could imagine that if I already had a list, then methodone would be the way to go... or is that always true? Might it make sense in any conditions to assert the list to facts using methodone, then process them using method two? Complete madness? But then again, I read that asserting facts is a very 'expensive' business, so list handling might be the way to go, even for large lists? Any thoughts? Or is it sometimes better to use one and not the other, depending on the situation? E.g. for memory optimisation use method 2, including asserting facts, and for speed use method 1?

        season(spring).
        season(summer).
        season(autumn).
        season(winter).

        % Season handling
        showseason(Season) :-
            atom_length(Season, LenSeason),
            write('Season Length is '), write(LenSeason), nl.

        % -------------------------------------------------------------
        % Method 1 - Findall facts/iterate through the list and process each
        % -------------------------------------------------------------

        % Iterate manually through a season list
        lenseason([]).
        lenseason([Season|MoreSeasons]) :-
            showseason(Season),
            lenseason(MoreSeasons).

        % Findall to build a list then iterate until all done
        methodone :-
            findall(Season, season(Season), AllSeasons),
            lenseason(AllSeasons),
            write('Done').

        % -------------------------------------------------------------
        % Method 2 - Use fail to force recursion
        % -------------------------------------------------------------

        methodtwo :-
            % Get one season and show it
            season(Season),
            showseason(Season),
            % Force prolog to backtrack to find another season
            fail.

        % No more seasons, we have finished
        methodtwo :-
            write('Done').

  • Tab Bar and Nav Controller: Where did I go wrong in my Interface Builder wiring?

    - by editor
    Even if you don't know how I've shot myself in the foot, a story which I've tried to lay out below, if you think I've done a good job showing the parameters of my problem I'd appreciate an upvote so that I might be able to grab some attention for my question. I've been working on an iPhone application in XCode and Interface Builder of the Tab Bar project type. After getting a table view of topics (business sectors) working fine I realized that I would need to add a Navigation Control to allow the user to drill into a subtopics (subsectors) table. As a green Objective-C developer, that was confusing, but I managed to get it working by reading various documentation trying out a few different IB options. My current setup is a Tab Bar Controller with Tab 1 as a Navigation Controller and Tab 2 a plain view with a Table View placed into it. The wiring works: I can log when table rows are selected and I'm ready to push a new View Controller onto the stack so that I can display the subtopics Table View. My problem: For some reason the first tab's Table View is a delegate and dataSource of the second ta. It doesn't make sense to me and I can't figure out why that's the only setup that works. Here is the wiring: Navigation Controller (Sectors) is a delegate of Tab Bar Navigation Bar is a delegate of Navigation Controller (Sectors) View Contoller (Sectors) has a view of Table View Table View (in Navigation Controller (Sectors)) is a delegate of First View Controller (Companies) Table View (in Navigation Controller (Sectors)) is a dataSource outlet of First View Controller (Companies) First View Controller (Companies) First View Contoller (Sectors) has a view of Table View Table View (in First View Controller (Companies)) is not hooked up to a dataSource outlet and is not a delegate When I click the tab buttons and look at the Inspector I see that the first tab is correctly hooked up to my MainWindow.xib and the second tab has selected a nib called SecondView.xib. It's in the File's Owner of MainWindow.xib where I inherit UITableViewDataSource and UITableViewDelegate (and also UITabBarControllerDelegate) in the .h, and in the .m where I implement the delegate methods. Why does this setup only work when the Table View in my first tab (View Controller (Sectors)) is a delegate and dataSource of the second tab? I'm confused: why wouldn't it need to be hooked up to the Navigation Controller-enabled tab in which the Table View is seen (Navigation Controller (Sectors))? The Table View seen on the second tab has neither dataSource and is not a delegate. I'm having trouble getting a pushViewController to fire (self.navigationController is not nil but the new View Controller still doesn't load) and I suspect that I need to work out this IB wiring issue before I can troubleshoot why the Nav Controller won't push a new View Controller onto the stack. if(nil == self.navigationController) { NSLog(@"self.navigationController is nil."); } else { NSLog(@"self.navigationController is not nil."); SectorList *subsectorViewController = [[SectorList alloc] initWithNibName:@"SectorList" bundle:nil]; subsectorViewController.title = @"Subsectors"; [[self navigationController] pushViewController:subsectorViewController animated:YES]; [subsectorViewController release]; }

  • Is it correct or incorrect for a Java JAR to contain its own dependencies?

    - by 4herpsand7derpsago
    I guess this is a two-part question. I am trying to write my own Ant task (MyFirstTask) that can be used in other projects' build.xml buildfiles. To do this, I need to compile and package my Ant task inside its own JAR. Because this Ant task that I have written is fairly complicated, it has about 20 dependencies (other JAR files), such as using XStream for OX-mapping, Guice for DI, etc.

    I am currently writing the package task in the build.xml file inside the MyFirstTask project (the buildfile that will package myfirsttask.jar, which is the reusable Ant task). I am suddenly realizing that I don't fully understand the intention of a Java JAR. Is it that a JAR should not contain dependencies, and should leave it to the runtime configuration (the app container, the runtime environment, etc.) to supply the dependencies it needs? I would assume that if this is the case, an executable JAR is an exception to the rule, yes? Or is it the intention for Java JARs to also include their dependencies?

    Either way, I don't want to force my users to copy-and-paste 25+ JARs into their Ant libs; that's just cruel. I like the way WAR files are set up, where the classpath for dependencies is defined under the classes/ directory. I guess, ultimately, I'd like my JAR structure to look like:

        myfirsttask.jar/
            com/          --> the root package of my compiled binaries
            config/       --> config files, XML, XSD, etc.
            classes/      --> all dependencies, guice-3.0.jar, xstream-1.4.3.jar, etc.
            META-INF/
                MANIFEST.MF

    I assume that in order to accomplish this (and get the runtime classpath to also look into the classes/ directory), I'll need to modify the MANIFEST.MF somehow (I know there's a manifest attribute called ClassPath, I believe?). I'm just having a tough time putting everything together, and have a looming/lingering question about the very intent of JARs to begin with.

    Can someone please confirm whether Oracle intends for JARs to contain their dependencies or not? And, either way, what would I have to do in the manifest (or anywhere else) to make sure that, at runtime, the classpath can find the dependencies stored under the classes/ directory? Thanks in advance!
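    As a point of reference, the manifest attribute is spelled Class-Path, and the standard class loader resolves its entries as paths relative to the directory containing myfirsttask.jar; it cannot load JARs nested inside the JAR itself, which is why dependencies are normally either shipped alongside the JAR or merged into a single combined JAR by the build. A minimal sketch of a MANIFEST.MF using it (the lib/ layout is an assumption, and whether Ant's <taskdef> honours this attribute is a separate question):

        Manifest-Version: 1.0
        Class-Path: lib/guice-3.0.jar lib/xstream-1.4.3.jar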

  • Generate A Simple Read-Only DAL?

    - by David
    I've been looking around for a simple solution to this, trying my best to lean towards something like NHibernate, but so far everything I've found seems to be trying to solve a slightly different problem. Here's what I'm looking at in my current project: We have an IBM iSeries database as a primary repository for a third party software suite used for our core business (a financial institution). Part of what my team does is write applications that report on or key off of a lot of this data in some way. In the past, we've been manually creating ADO .NET connections (we're using .NET 3.5 and Visual Studio 2008, by the way) and manually writing queries, etc. Moving forward, I'd like to simplify the process of getting data from there for the development team. Rather than creating connections and queries and all that each time, I'd much rather a developer be able to simply do something like this: var something = (from t in TableName select t); And, ideally, they'd just get some IQueryable or IEnumerable of generated entities. This would be done inside a new domain core that I'm building where these entities would live and the applications would interface with it through a request/response service layer. A few things to note are: The entities that correspond to the database tables should be generated once and we'd prefer to manually keep them updated over time. That is, if columns/tables are added to the database then we shouldn't have to do anything. (If some are deleted, of course, it will break, but that's fine.) But if we need to use a new column, we should be able to just add it to the necessary class(es) without having to re-gen the whole thing. The whole thing should be SELECT-only. We're not doing a full DAL here because we don't want to be able to break anything in the database (even accidentally). We don't need any kind of mapping between our domain objects and the generated entity types. The domain barely covers a fraction of the data that's in there, most of it we'll never need, and we would rather just create re-usable maps manually over time. I already have a logical separation for the DAL where my "repository" classes return domain objects, I'm just looking for a better alternative to manual ADO to be used inside the repository classes. Any suggestions? It seems like what I'm doing is just enough outside the normal demand for DAL/ORM tools/tutorials online that I haven't been able to find anything. Or maybe I'm just overlooking something obvious?

  • Architecture Suggestions/Recommendations for a Web Application with Sub-Apps

    - by user579218
    Hello. I’m starting to plan an architecture for a big web application, and I wanted to get suggestions and/or recommendations on where to begin and which technologies and/or frameworks to use. The application will be an Intranet-based web site using Windows authentication, running on IIS and using SQL Server and ASP.NET. It’ll need to be structured as a main/shell application with sub-applications that are “pluggable” based on some configuration settings. The main or shell application is to provide the overall user interface structure – header/footer, dynamically built tabs for each available sub-app, and a content area in which the sub-application will be loaded when the user clicks on the sub-application’s tab. So, on start-up of the main/shell application, configuration information will be queried from a database, and, based on the user and which of the sub-apps are available, the main or shell app would dynamically build tabs (or buttons or something) as a way to access each individual application. On start-up, the content area will be populated with the “home” sub-app. But, clicking on an sub-app tab will cause the content area to be populated with the sub-app corresponding to the tab. For example, we’re going to have a reports application, a display application, and probably a couple other distinct applications. On startup of the main/shell application, after determining who the user is, the main app will query the database to determine which sub-apps the user can use and build out the UI. Then the user can navigate between available sub-apps and do their work in each. Finally, the entire app and all sub-apps need to be a layered design with presentation, service, business, and data access layers, as well as cross-cutting objects for things such as logging, exception handling, etc. Anyway, my questions revolve around where to begin to plan something like this application. What technologies/frameworks would work best in developing a solution for this application? MVC? MVP? WCSF? EF? NHibernate? Enterprise Library? Repository Pattern? Others???? I know all these technologies/frameworks are not used for the same purpose, but knowing which ones to focus on is a little overwhelming. Which ones would be the best choice(s) for a solution? Which ones work well together for an end-to-end design? How would one structure the VS project for something like this? Thanks!

  • A two player game over the intranet..

    - by Santwana
    Hi everybody.. I am a student of 3rd year engineering and only a novice in my programming skills. I need some help with my project.. I wish to develop a two player game to be played over the network (Intranet). I want to develop a simple website with a few html pages for this.My ideas for the project run as follows: 1.People can log in from different systems and check who ever is online on the network currently. the page also shows who is playing with whom. 2.If a person is interested in playing with a player who is currently online, he sends a request of which the other player is somehow notified( using a message or an alert on his profile page..) 3.If the player accepts the request, a game is started. This is exactly where I am clueless.. How can I make them play the game? I need to develop a turn based game with two players, eg chessboard.. how can I do this? The game has to be played live.. and it is time tracked. i need your help with coding the above.. the other features i wish to include are: 4.The game could not be abruptly terminated by any one if the users.The request to terminate the game should be sent to the other player first and only if he accepts can the game be terminated. Whoever wins the game would get a plus 10 on their credit and if he terminated he gets a minus 10. The credits remains constant even if he loses but the success percentage is reduced. 6.The player with highest winning percentage is projected as the player of the week on the home page and he can post a challenge to all others.. I only have an intermediate knowledge of core java and know the basics of Swing and Awt. I am not at all familiar with networking in java right now. I have 5 to 6 weeks of time for developing the project but I hope to learn the things before I start my project. i would prefer to use a lan to illustrate the project and I know only java,jsp,oracle,html and bit of xml to develop my proj. Also I wish to know if I can code this within 6 weeks, would it be too difficult or complicated? Please spare some time to tell me. Please.. please.. I need your suggestions and help.. thank you so much..

  • Strange Java Socket Behavior (Connects, but Doesn't Send)

    - by Donald Campbell
    I have a fairly complex project that boils down to a simple Client / Server communicating through object streams. Everything works flawlessly for two consecutive connections (I connect once, work, disconnect, then connect again, work, and disconnect). The client connects, does its business, and then closes. The server successfully closes both the object output stream and the socket, with no IO errors. When I try to connect a third time, the connection appears to go through (the ServerSocket.accept() method goes through and an ObjectOutputStream is successfully created). No data is passed, however. The inputStream.readUnshared() method simply blocks.

    I have taken the following memory precautions: when it comes time to close the sockets, all running threads are stopped, and all objects are nulled out. After every writeUnshared() method call, the ObjectOutputBuffer is flushed and reset.

    Has anyone encountered a similar problem, or does anyone have any suggestions? I'm afraid my project is rather large, and so copying code is problematic. The project boils down to this:

    SERVER MAIN

        ServerSocket serverSocket = new ServerSocket(port);
        while (true) {
            new WorkThread(serverSocket.accept()).start();
        }

    WORK THREAD (SERVER)

        public void run() {
            ObjectInputBuffer inputBuffer = new ObjectInputBuffer(new BufferedInputStream(socket.getInputStream()));
            while (running) {
                try {
                    Object myObject = inputBuffer.readUnshared();
                    // do work is not specified in this sample
                    doWork(myObject);
                } catch (IOException e) {
                    running = false;
                }
            }

            try {
                inputBuffer.close();
                socket.close();
            } catch (Exception e) {
                System.out.println("Could not close.");
            }
        }

    CLIENT

        public Client() {
            Object myObject;
            Socket mySocket = new Socket(address, port);

            try {
                ObjectOutputBuffer output = new ObjectOutputBuffer(new BufferedOutputStream(mySocket.getOutputStream()));
                output.reset();
                output.flush();
            } catch (Exception e) {
                System.out.println("Could not get an input.");
                mySocket.close();
                return;
            }

            // get object data is not specified in this sample. it simply returns a serializable object
            myObject = getObjectData();

            while (myObject != null) {
                try {
                    output.writeUnshared(myObject);
                    output.reset();
                    output.flush();
                } catch (Exception e) {
                    e.printStackTrace();
                    break;
                } // catch
            } // while

            try {
                output.close();
                socket.close();
            } catch (Exception e) {
                System.out.println("Could not close.");
            }
        }

    Thank you to everyone who may be able to help!

  • Techniques for querying a set of object in-memory in a Java application

    - by Edd Grant
    Hi All, We have a system which performs a 'coarse search' by invoking an interface on another system which returns a set of Java objects. Once we have received the search results I need to be able to further filter the resulting Java objects based on certain criteria describing the state of the attributes (e.g. from the initial objects return all objects where x.y z && a.b == c). The criteria used to filter the set of objects each time is partially user configurable, by this I mean that users will be able to select the values and ranges to match on but the attributes they can pick from will be a fixed set. The data sets are likely to contain <= 10,000 objects for each search. The search will be executed manually by the application user base probably no more than 2000 times a day (approx). It's probably worth mentioning that all the objects in the result set are known domain object classes which have Hibernate and JPA annotations describing their structure and relationship. Off the top of my head I can think of 3 ways of doing this: For each search persist the initial result set objects in our database, then use Hibernate to re-query them using the finer grained criteria. Use an in-memory Database (such as hsqldb?) to query and refine the initial result set. Write some custom code which iterates the initial result set and pulls out the desired records. Option 1 seems to involve a lot of toing and froing across a network to a physical Database (Oracle 10g) which might result in a lot of network and disk activity. It would also require the results from each search to be isolated from other result sets to ensure that different searches don't interfere with each other. Option 2 seems like a good idea in principle as it would allow me to do the finer query in memory and would not require the persistence of result data which would only be discarded after the search was complete. Gut feeling is that this could be pretty performant too but might result in larger memory overheads (which is fine as we can be pretty flexible on the amount of memory our JVM gets). Option 3 could be very performant but is something I would like to avoid as any code we write would require such careful testing that the time taken to acheive something flexible and robust enough would probably be prohibitive. I don't have time to prototype all 3 ideas so I am looking for comments people may have on the 3 options above, plus any further ideas I have not considered, to help me decide which idea might be most suitable. I'm currently leaning toward option 2 (in memory database) so would be keen to hear from people with experience of querying POJOs in memory too. Hopefully I have described the situation in enough detail but don't hesitate to ask if any further information is required to better understand the scenario. Cheers, Edd
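    For context on option 3, a hand-rolled in-memory refinement of the coarse results usually amounts to a predicate loop over the returned objects. A minimal sketch follows; the interface and class names are assumptions for illustration, not taken from the actual domain model:

        import java.util.ArrayList;
        import java.util.List;

        // A user-configurable criterion is modelled as a predicate over a result object.
        interface Criterion<T> {
            boolean matches(T candidate);
        }

        // Keeps only the coarse results that satisfy every configured criterion.
        final class ResultFilter {
            static <T> List<T> filter(List<T> coarseResults, List<Criterion<T>> criteria) {
                List<T> refined = new ArrayList<T>();
                for (T candidate : coarseResults) {
                    boolean keep = true;
                    for (Criterion<T> criterion : criteria) {
                        if (!criterion.matches(candidate)) {
                            keep = false;
                            break;
                        }
                    }
                    if (keep) {
                        refined.add(candidate);
                    }
                }
                return refined;
            }
        }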

  • Converting text into numeric in xls using Java

    - by Work World
    When I create an Excel sheet through Java, the column which has a NUMBER datatype in the Oracle table gets converted to text format in Excel. I want it to remain in number format. Below is my code snippet for the Excel creation:

        FileWriter fw = new FileWriter(tempFile.getAbsoluteFile(),true);
        // BufferedWriter bw = new BufferedWriter(fw);

        HSSFWorkbook wb = new HSSFWorkbook();
        HSSFSheet sheet = wb.createSheet("Excel Sheet");

        //Column Size of excel
        for(int i=0;i<10;i++)
        {
            sheet.setColumnWidth((short) i, (short)8000);
        }

        String userSelectedValues=result;

        HSSFCellStyle style = wb.createCellStyle();
        ///HSSFDataFormat df = wb.createDataFormat();
        style.setFillForegroundColor(HSSFColor.GREY_25_PERCENT.index);
        style.setFillPattern(HSSFCellStyle.SOLID_FOREGROUND);
        //style.setDataFormat(df.getFormat("0"));

        HSSFFont font = wb.createFont();
        font.setColor(HSSFColor.BLACK.index);
        font.setBoldweight((short) 700);
        style.setFont(font);

        int selecteditems=userSelectedValues.split(",").length;
        // HSSFRow rowhead = sheet.createRow((short)0);
        //System.out.println("**************selecteditems************" +selecteditems);

        for(int k=0; k<selecteditems;k++)
        {
            HSSFRow rowhead = sheet.createRow((short)k);
            if(userSelectedValues.contains("O_UID"))
            {
                HSSFCell cell0 = rowhead.createCell((short) k);
                cell0.setCellValue("O UID");
                cell0.setCellStyle(style);
                k=k+1;
            }
            ///some columns here..
        }

        int index=1;
        for (int i = 0; i<dataBeanList.size(); i++)
        {
            odb=(OppDataBean)dataBeanList.get(i);
            HSSFRow row = sheet.createRow((short)index);
            for(int j=0;j<selecteditems;j++)
            {
                if(userSelectedValues.contains("O_UID"))
                {
                    row.createCell((short)j).setCellValue(odb.getUID());
                    j=j+1;
                }
            }
            index++;
        }

        FileOutputStream fileOut = null;
        try {
            fileOut = new FileOutputStream(path.toString()+"/temp.xls");
        } catch (FileNotFoundException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        }
        try {
            wb.write(fileOut);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        try {
            fileOut.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
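    A note on the likely cause, for context: if odb.getUID() returns a String, then HSSFCell.setCellValue(String) will always produce a text cell, regardless of the Oracle column type. A minimal sketch of writing the cell as a number instead, meant to slot into the data loop above (that getUID() returns parseable text, and the "0" format, are assumptions):

        // Numeric style built once from the workbook's built-in data formats ("0" = plain integer)
        HSSFCellStyle numericStyle = wb.createCellStyle();
        numericStyle.setDataFormat(HSSFDataFormat.getBuiltinFormat("0"));

        // Inside the data loop: write a double, not a String, so Excel treats it as a number
        HSSFCell cell = row.createCell((short) j);
        cell.setCellValue(Double.parseDouble(odb.getUID().trim()));
        cell.setCellStyle(numericStyle);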

  • Grails: GGTS not running on Amazon AWS EC2 anyone else successful?

    - by Anonymous Human
    I'm just curious if anyone has had success trying to run the Groovy Grails Tool Suite on an Amazon AWS EC2 instance with its display exported to your Windows machine. If so, I wanted to know which flavor of Linux was used on the EC2. I am not having much success with it on the Amazon Linux but haven't tried their Ubuntu instances yet. I got all the way to getting GGTS installed and getting the display exported, but when I launch GGTS I get log errors about missing libraries. This is most likely because I didn't use yum to install it, so I am probably missing dependencies, but I didn't have a choice; it's not offered as a yum package. Here are my log file errors when I try to launch GGTS:

        !SESSION 2014-06-08 03:08:04.873 -----------------------------------------------
        eclipse.buildId=3.5.1.201405030657-RELEASE-e43
        java.version=1.7.0_55
        java.vendor=Oracle Corporation
        Framework arguments:  -product org.springsource.ggts.ide
        Command-line arguments:  -os linux -ws gtk -arch x86_64 -product org.springsource.ggts.ide

        !ENTRY org.eclipse.osgi 4 0 2014-06-08 03:08:12.116
        !MESSAGE Application error
        !STACK 1
        java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
            /home/ec2-user/ggts_sh/ggts-3.5.1.RELEASE/configuration/org.eclipse.osgi/bundles/704/1/.cp/libswt-pi-gtk-4335.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
            no swt-pi-gtk in java.library.path
            /home/ec2-user/.swt/lib/linux/x86_64/libswt-pi-gtk-4335.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
            Can't load library: /home/ec2-user/.swt/lib/linux/x86_64/libswt-pi-gtk.so
            at org.eclipse.swt.internal.Library.loadLibrary(Library.java:331)
            at org.eclipse.swt.internal.Library.loadLibrary(Library.java:240)
            at org.eclipse.swt.internal.gtk.OS.<clinit>(OS.java:45)
            at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63)
            at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54)
            at org.eclipse.swt.widgets.Display.<clinit>(Display.java:133)
            at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:679)
            at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:162)
            at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:154)
            at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:96)
            at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:181)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:636)
            at org.eclipse.equinox.launcher.Main.basicRun(Main.java:591)
            at org.eclipse.equinox.launcher.Main.run(Main.java:1450)
            at org.eclipse.equinox.launcher.Main.main(Main.java:1426)

  • I am having trouble using jquery to submit a form. It was working before

    - by noah
    When a user clicks a link it uses jquery ajax to submit a form to go to paypal. Not working for some reason. Really appreciate any help... LINK TO CLICK I put this in an href for onClick: javascript:go_paypal(); CODE TO EXECUTE ON CLICK function go_paypal() { data = 'req_paypal=1'; $.blockUI({ message: '<h1> Going to Paypal...</h1>',css:{background:'#000'} }); $.ajax({ type: "POST", url: "index.php", data: data, success: function(data) { $("#paypal_form").html(data); $("#payPalForm").submit(); } , error: function() {$.unblockUI(); alert('Unable to communicate to server.'); } }); return false; } CODE TO GO ON SUBMIT if(isset($_POST['req_paypal']) && $_POST['req_paypal'] == 1 ) { $sql = 'INSERT INTO `transactions` (id,type,ip,time,ammount,status) VALUES (NULL,1,\''.$_SERVER['REMOTE_ADDR'].'\',\''.time().'\',\''.$global['paypal_prod_amount'].'\',0) '; echo $sql; mysql_query($sql); $id = mysql_insert_id(); $html = ' <form action="https://www.sandbox.paypal.com/cgi-bin/webscr" method="post" id="payPalForm"> <input type="hidden" name="item_number" value="One Year of Imgur Pro"> <input type="hidden" name="cmd" value="_xclick"> <input type="hidden" name="no_note" value="1"> <input type="hidden" name="business" value="'.$global['paypal_email'].'"> <input type="hidden" name="custom" value="'.base64_encode($id).'"> <input type="hidden" name="currency_code" value="USD"> <input type="hidden" name="return" value="'.$global['paypal_return'].'"> <input name="item_name" type="hidden" id="item_name" value="One Year of Imgur Pro" > <input name="amount" type="hidden" id="amount" value="'.$global['paypal_prod_amount'].'" > </form> '; echo $html;exit; }

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind i need to describe this by abstracting all possible confidential info: I've been put in charge of a 10-year old transactional system of which the majority business logic is implemented at database level (triggers, stored procedures etc). Win2000 server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :( The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters, lets call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular - ranging 0 to 60 times a minute, depending on what items the program instance is responsible for) . Performance of the system has dropped considerably with 10 yrs of data growth: the reason is the deadlocks and specifically deadlock wait times. The deadlock is on the Employee table. I have discovered: In sp_ProcessTrans' execution, it selects from an Employee table 7 times (dont ask) The select is done on a field that is NOT the primary key No index exists on this field. Thus a table scan is performed. 7 times. per transaction So the reason for deadlocks is clear. I created a non-unique ordered clustered index on the field (field looks good, almost unique, NUM(7), very rarely changes). Immediate improvement in the test environment. The problem is that i cannot simulate the deadlocks in a test environment (I'd need 30 workstations; i'd need to simulate 'realistic' activity on those stations, so visualization is out). I need to know if i must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements/extra wait time etc) to create this field index on the production database while the transactions are still taking place? (although i can select a time when transactions are fairly quiet through the 30 stations) Are there any hidden dangers i'm not seeing (not looking forward to needing to restore the DB if something goes wrong, restoring would take a lot of time with 10yrs of data).

  • Domain-Driven-Design question

    - by Michael
    Hello everyone, I have a question about DDD. I'm building a application to learn DDD and I have a question about layering. I have an application that works like this: UI layer calls = Application Layer - Domain Layer - Database Here is a small example of how the code looks: //****************UI LAYER************************ //Uses Ioc to get the service from the factory. //This factory would be in the MyApp.Infrastructure.dll IImplementationFactory factory = new ImplementationFactory(); //Interface and implementation for Shopping Cart service would be in MyApp.ApplicationLayer.dll IShoppingCartService service = factory.GetImplementationFactory<IShoppingCartService>(); //This is the UI layer, //Calling into Application Layer //to get the shopping cart for a user. //Interface for IShoppingCart would be in MyApp.ApplicationLayer.dll //and implementation for IShoppingCart would be in MyApp.Model. IShoppingCart shoppingCart = service.GetShoppingCartByUserName(userName); //Show shopping cart information. //For example, items bought, price, taxes..etc ... //Pressed Purchase button, so even for when //button is pressed. //Uses Ioc to get the service from the factory again. IImplementationFactory factory = new ImplementationFactory(); IShoppingCartService service = factory.GetImplementationFactory<IShoppingCartService>(); service.Purchase(shoppingCart); //**********************Application Layer********************** public class ShoppingCartService : IShoppingCartService { public IShoppingCart GetShoppingCartByUserName(string userName) { //Uses Ioc to get the service from the factory. //This factory would be in the MyApp.Infrastructure.dll IImplementationFactory factory = new ImplementationFactory(); //Interface for repository would be in MyApp.Infrastructure.dll //but implementation would by in MyApp.Model.dll IShoppingCartRepository repository = factory.GetImplementationFactory<IShoppingCartRepository>(); IShoppingCart shoppingCart = repository.GetShoppingCartByUserName(username); //Do shopping cart logic like calculating taxes and stuff //I would put these in services but not sure? ... return shoppingCart; } public void Purchase(IShoppingCart shoppingCart) { //Do Purchase logic and calling out to repository ... } } I've seem to put most of my business rules in services rather than the models and I'm not sure if this is correct? Also, i'm not completely sure if I have the laying correct? Do I have the right pieces in the correct place? Also should my models leave my domain model? In general I'm I doing this correct according DDD? Thanks!

  • How solve consumer/producer task using semaphores

    - by user1074896
    I have a SimpleProducerConsumer class that illustrates the consumer/producer problem (I am not sure that it's correct):

        public class SimpleProducerConsumer {

            private Stack<Object> stack = new Stack<Object>();
            private static final int STACK_MAX_SIZE = 10;

            public static void main(String[] args) {
                SimpleProducerConsumer pc = new SimpleProducerConsumer();
                new Thread(pc.new Producer(), "p1").start();
                new Thread(pc.new Producer(), "p2").start();
                new Thread(pc.new Consumer(), "c1").start();
                new Thread(pc.new Consumer(), "c2").start();
                new Thread(pc.new Consumer(), "c3").start();
            }

            public synchronized void push(Object d) {
                while (stack.size() >= STACK_MAX_SIZE)
                    try {
                        wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                stack.push(new Object());
                System.out.println("push " + Thread.currentThread().getName() + " " + stack.size());
                notify();
            }

            public synchronized Object pop() {
                while (stack.size() == 0)
                    try {
                        wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                try {
                    Thread.sleep(50);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                stack.pop();
                System.out.println("pop " + Thread.currentThread().getName() + " " + stack.size());
                notify();
                return null;
            }

            class Consumer implements Runnable {
                @Override
                public void run() {
                    while (true) {
                        pop();
                    }
                }
            }

            class Producer implements Runnable {
                @Override
                public void run() {
                    while (true) {
                        push(new Object());
                    }
                }
            }
        }

    I found a simple realization of a semaphore (here: http://docs.oracle.com/javase/tutorial/essential/concurrency/guardmeth.html; I know that there is the concurrency package). How do I need to change the code to replace the Java object monitors with my custom semaphore (to illustrate the C/P problem using semaphores)?

    Semaphore:

        class Semaphore {
            private int counter;

            public Semaphore() {
                this(0);
            }

            public Semaphore(int i) {
                if (i < 0)
                    throw new IllegalArgumentException(i + " < 0");
                counter = i;
            }

            public synchronized void release() {
                if (counter == 0) {
                    notify();
                }
                counter++;
            }

            public synchronized void acquire() throws InterruptedException {
                while (counter == 0) {
                    wait();
                }
                counter--;
            }
        }
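    For illustration, the textbook shape of a bounded buffer with counting semaphores uses one semaphore counting free slots, one counting filled slots, and a binary semaphore guarding the shared stack. A minimal sketch of push/pop rewritten around the custom Semaphore class above (only a sketch; the Producer/Consumer loops would also need to catch InterruptedException, and the field names are assumed to match SimpleProducerConsumer):

        // Assumes the same stack and STACK_MAX_SIZE fields as in SimpleProducerConsumer.
        private final Semaphore emptySlots = new Semaphore(STACK_MAX_SIZE); // free capacity
        private final Semaphore filledSlots = new Semaphore(0);             // items available
        private final Semaphore mutex = new Semaphore(1);                   // guards the stack

        public void push(Object d) throws InterruptedException {
            emptySlots.acquire();   // wait until there is room
            mutex.acquire();        // enter the critical section
            stack.push(d);
            System.out.println("push " + Thread.currentThread().getName() + " " + stack.size());
            mutex.release();
            filledSlots.release();  // signal that one more item is available
        }

        public Object pop() throws InterruptedException {
            filledSlots.acquire();  // wait until an item is available
            mutex.acquire();
            Object item = stack.pop();
            System.out.println("pop " + Thread.currentThread().getName() + " " + stack.size());
            mutex.release();
            emptySlots.release();   // signal that one more slot is free
            return item;
        }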

  • Is 4-5 years the “Midlife Crisis” for a programming career?

    - by Jeffrey
    I’ve been programming C# professionally for a bit over 4 years now. For the past 4 years I’ve worked for a few small/medium companies ranging from “web/ads agencies”, small industry specific software shops to a small startup. I've been mainly doing "business apps" that involves using high-level programming languages (garbage collected) and my overall experience was that all of the works I’ve done could have been more professional. A lot of the things were done incorrectly (in a rush) mainly due to cost factor that people always wanted something “now” and with the smallest amount of spendable money. I kept on thinking maybe if I could work for a bigger companies or a company that’s better suited for programmers, or somewhere that's got the money and time to really build something longer term and more maintainable I may have enjoyed more in my career. I’ve never had a “mentor” that guided me through my 4 years career. I am pretty much blog / google / self taught programmer other than my bachelor IT degree. I’ve also observed another issue that most so called “senior” programmer in “my working environment” are really not that senior skill wise. They are “senior” only because they’ve been a long time programmer, but the code they write or the decisions they make are absolutely rubbish! They don't want to learn, they don't want to be better they just want to get paid and do what they've told to do which make sense and most of us are like that. Maybe that’s why they are where they are now. But I don’t want to become like them I want to be better. I’ve run into a mental state that I no longer intend to be a programmer for my future career. I started to think maybe there are better things out there to work on. The more blogs I read, the more “best practices” I’ve tried the more I feel I am drifting away from “my reality”. But I am not a great programmer otherwise I don't think I am where I am now. I think 4-5 years is a stage that can be a step forward career wise or a step out of where you are. I just wanted to hear what other have to say about what I’ve mentioned above and whether you’ve experienced similar situation in your past programming career and how you dealt with it. Thanks.

  • Figuring out QuadCurveTo's parameters

    - by Fev
    Could you guys help me figure out QuadCurveTo's 4 parameters? I tried to find information on http://docs.oracle.com/javafx/2/api/javafx/scene/shape/QuadCurveTo.html, but it's hard for me to understand without a picture. I searched on Google for 'Quadratic Bezier' but it shows me more than 2 coordinates, and I'm confused and blind now. I know those 4 parameters draw 2 lines to control the path, but how do we know exactly which coordinates the object will pass through by only knowing those 2 path controllers? Are there some formulas?

        import javafx.animation.PathTransition;
        import javafx.animation.PathTransition.OrientationType;
        import javafx.application.Application;
        import static javafx.application.Application.launch;
        import javafx.scene.Group;
        import javafx.scene.Scene;
        import javafx.scene.paint.Color;
        import javafx.scene.shape.MoveTo;
        import javafx.scene.shape.Path;
        import javafx.scene.shape.QuadCurveTo;
        import javafx.scene.shape.Rectangle;
        import javafx.stage.Stage;
        import javafx.util.Duration;

        public class _6 extends Application {

            public Rectangle r;

            @Override
            public void start(final Stage stage) {
                r = new Rectangle(50, 80, 80, 90);
                r.setFill(javafx.scene.paint.Color.ORANGE);
                r.setStrokeWidth(5);
                r.setStroke(Color.ANTIQUEWHITE);

                Path path = new Path();
                path.getElements().add(new MoveTo(100.0f, 400.0f));
                path.getElements().add(new QuadCurveTo(150.0f, 60.0f, 100.0f, 20.0f));

                PathTransition pt = new PathTransition(Duration.millis(1000), path);
                pt.setDuration(Duration.millis(10000));
                pt.setNode(r);
                pt.setPath(path);
                pt.setOrientation(OrientationType.ORTHOGONAL_TO_TANGENT);
                pt.setCycleCount(4000);
                pt.setAutoReverse(true);
                pt.play();

                stage.setScene(new Scene(new Group(r), 500, 700));
                stage.show();
            }

            public static void main(String[] args) {
                launch(args);
            }
        }

    You can find those coordinates on the new QuadCurveTo(150.0f, 60.0f, 100.0f, 20.0f) line, and below is the picture of the quadratic Bezier.
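    For reference, the four arguments are the control point (controlX, controlY) followed by the end point (x, y); the start point is wherever the path currently is, here the MoveTo at (100, 400). The node follows the standard quadratic Bezier curve, so there is a formula for every intermediate coordinate:

        B(t) = (1 - t)^2 * P0  +  2 * (1 - t) * t * P1  +  t^2 * P2,        0 <= t <= 1

    Applied per coordinate to the path above, with P0 = (100, 400), P1 = (150, 60) and P2 = (100, 20):

        x(t) = (1 - t)^2 * 100 + 2 * (1 - t) * t * 150 + t^2 * 100
        y(t) = (1 - t)^2 * 400 + 2 * (1 - t) * t * 60  + t^2 * 20

    At t = 0 the node is at the start point and at t = 1 at the end point; the control point (150, 60) is generally not on the curve, it only pulls the curve toward itself.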

  • Should the argument be passed by reference in this .net example?

    - by Hamish Grubijan
    I have used Java, C++, .Net. (in that order). When asked about by-value vs. by-ref on interviews, I have always done well on that question ... perhaps because nobody went in-depth on it. Now I know that I do not see the whole picture. I was looking at this section of code written by someone else: XmlDocument doc = new XmlDocument(); AppendX(doc); // Real name of the function is different AppendY(doc); // ditto When I saw this code, I thought: wait a minute, should not I use a ref in front of doc variable (and modify AppendX/Y accordingly? it works as written, but made me question whether I actually understand the ref keyword in C#. As I thought about this more, I recalled early Java days (college intro language). A friend of mine looked at some code I have written and he had a mental block - he kept asking me which things are passed in by reference and when by value. My ignorant response was something like: Dude, there is only one kind of arg passing in Java and I forgot which one it is :). Chill, do not over-think and just code. Java still does not have a ref does it? Yet, Java hackers seem to be productive. Anyhow, coding in C++ exposed me to this whole by reference business, and now I am confused. Should ref be used in the example above? I am guessing that when ref is applied to value types: primitives, enums, structures (is there anything else in this list?) it makes a big difference. And ... when applied to objects it does not because it is all by reference. If things were so simple, then why would not the compiler restrict the usage of ref keyword to a subset of types. When it comes to objects, does ref serve as a comment sort of? Well, I do remember that there can be problems with null and ref is also useful for initializing multiple elements within a method (since you cannot return multiple things with the same easy as you would do in Python). Thanks.

  • javascript ul button dropdown breaks after doing more than 1 selection

    - by Apprentice Programmer
    So i have a button drop down list that i want users to select a choice, and basically the button will return the users selection.Pretty straight forward, tested on jsFiddle and works great. Using ruby on rails btw, so i'm not sure if it might conflicting the way rails handle javascript actions. Heres the code: <%= form_for(@user, :html => {:class => 'form-horizontal'}) do |f| %> <fieldset> <p>Do you have experience in Business? If yes, select one of the following: <div class="input-group"> <div class="input-group-btn btn-group"> <button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown">Select one <span class="caret"></span></button> <ul class="dropdown-menu"> <li ><a href="#">Entrepreneurship</a></li> <li ><a href="#">Investments</a></li> <li ><a href="#">Management</a></li> <li class="divider"></li> <li ><a href="#">All of the Above</a></li> </ul> </div><!-- /btn-group --> <%= f.text_field :years_business , :class => "form-control", :placeholder => "Years of experience" %> </div> Now there are 2 more of these, and basically what happens is that if I select an item for the first time from the dropdown list, everything works great. But the moment I select the same button/or new button, the page immediately kind of refreshes, they selected value will not show up after the list drops down and user selects a value. I viewed the page source and added additional javascript src and types, but still doesnt work. the jquery code: jQuery(function ($) { $('.input-group-btn .dropdown-menu > li:not(.divider)').click(function(){ $(this).closest('ul').prev().text($(this).text()) }) }); Any suggestions what is causing the problem?? The jsfiddle link is here: http://jsfiddle.net/w4s8u/7/

  • css issue on hover- shaky effect

    - by Sarika Thapaliya
    <style type="text/css"> .linkcontainer{border-right: solid 0.2px white;margin-right:1px} .hardlink{color: #FFF !important; border: 1px solid transparent; } .hardlink:hover{ background:url("/_layouts/images/bgximg.png") repeat-x -0px -489px; display:inline-block; background-color:#21374C; border:0.2px solid #5badff; line-height:20px; text-decoration:none !important;} </style> <div style="padding-bottom:3px;background:transparent; color:white!important; float:left; margin-right:20px; line-height:42px;"> <span class="linkcontainer"> <a class="hardlink" style="padding:0 10px;" href="http://hronline">HROnline</a> </span> <span class="linkcontainer"> <a class="hardlink" style="padding:0 10px; " href="http://hronline/ec">Employee Center</a> </span> <span class="linkcontainer"> <a class="hardlink" style="padding:0 10px; " href="http://hronline/businesscommunities">Business Communities</a> </span> <span class="linkcontainer"> <a class="hardlink" style="padding:0 10px;" href="http://hronline/internalservices">Internal Services</a> </span> <span class="linkcontainer"> <a class="hardlink" style="padding:0 10px;" href="http://hronline/policiesprocedures">Policies&procedures</a> </span> <span class="linkcontainer"> <a class="hardlink" style="padding:0 10px;" href="http://hronline/qualitybestpractices">Best Practices</a> </span> </div> I added a right border to the span that contain menu links. When I hover on each menu links, it also has some background. This is causing jerky effect on the whole container.. What is causing the shaky effect on hover? I don't seem to figure it out--again..
