Search Results

Search found 9067 results on 363 pages for 'big fizzy'.

Page 290/363

  • Match subpatterns in any order

    - by Yaroslav
    I have a long regexp with two complicated subpatterns inside. How can I match those subpatterns in any order? Simplified example:

        /(apple)?\s?(banana)?\s?(orange)?\s?(kiwi)?/

    and I want to match both of

        apple banana orange kiwi
        apple orange banana kiwi

    It is a very simplified example. In my case banana and orange are long, complicated subpatterns and I don't want to do something like

        /(apple)?\s?((banana)?\s?(orange)?|(orange)?\s?(banana)?)\s?(kiwi)?/

    Is it possible to group subpatterns like chars in a character class?

    UPD: Real data as requested:

        14:24 26,37 Mb
        108.53 01:19:02 06.07
        24.39 19:39
        46:00

    My strings are much longer, but this is the significant part. Here you can see the lines I need to match. The first has two values: length (14 min 24 sec) and size (26.37 Mb). The second has three values, but in a different order: size (108.53 Mb), length (01 h 19 m 02 s) and date (June 07). The third has two values, size and length; the fourth has only length. There are a couple more variations and I need to parse all the values. I have a regexp that is pretty close, except that I can't figure out how to match the patterns in a different order without writing everything twice:

        (?<size>\d{1,3}[.,]\d{1,2}\s+(?:Mb)?)?\s?
        (?<length>(?:(?:01:)?\d{1,2}:\d{2}))?\s*
        (?<date>\d{2}\.\d{2})?

    NOTE: this is only part of a big regexp that works fine already.
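    One common way to accept sub-patterns in any order is to put them into a single alternation and apply it repeatedly, collecting whichever branch matched each time. A minimal sketch (the question names no host language, so Java here is an assumption), using the simplified fruit example:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class AnyOrderMatch {
            public static void main(String[] args) {
                // One branch per optional sub-pattern; repeating find() accepts them in any order.
                Pattern token = Pattern.compile("(?<apple>apple)|(?<banana>banana)|(?<orange>orange)|(?<kiwi>kiwi)");

                for (String line : new String[] { "apple banana orange kiwi", "apple orange banana kiwi" }) {
                    String apple = null, banana = null, orange = null, kiwi = null;
                    Matcher m = token.matcher(line);
                    while (m.find()) {   // each find() consumes one sub-pattern, in whatever order it appears
                        if (m.group("apple") != null)  apple  = m.group("apple");
                        if (m.group("banana") != null) banana = m.group("banana");
                        if (m.group("orange") != null) orange = m.group("orange");
                        if (m.group("kiwi") != null)   kiwi   = m.group("kiwi");
                    }
                    System.out.printf("apple=%s banana=%s orange=%s kiwi=%s%n", apple, banana, orange, kiwi);
                }
            }
        }

    The same shape works for the real (?<size>...) / (?<length>...) / (?<date>...) groups, with the caveat that tokens like 06.07 match both the size and the date sub-patterns, so those two still need extra context to tell apart no matter how the ordering problem is solved.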

    Read the article

  • View layout after setvisibility

    - by user1478296
    I've got a problem with setting visibility on a relative layout. I have part of a big layout in a RelativeLayout and, below that, a TextView. But in my code, when myRelativeLayout.setVisibility(View.GONE); is called, the TextView below it does not appear. I have tried several ways to rearrange the layout, but I need that TextView under it. Thanks.

    My XML:

        <merge xmlns:android="http://schemas.android.com/apk/res/android" >
            <ScrollView
                android:id="@+id/scrollView_liab_ra_flipper_04"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content" >
                <RelativeLayout
                    android:id="@+id/linearLayout_liab_ra_flipper_04"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:orientation="vertical" >
                    <TextView
                        android:id="@+id/someTextView"
                        android:text="Something" />
                    <!-- This relative layout should be removable -->
                    <RelativeLayout
                        android:id="@+id/vg_liab_ra_04_flipper_car_container_visible"
                        android:layout_width="wrap_content"
                        android:layout_height="wrap_content"
                        android:layout_below="@+id/someTextView" >
                        <TextView
                            android:id="@+id/tv_1"
                            style="@style/WhiteFormText"
                            android:layout_width="wrap_content"
                            android:layout_height="wrap_content"
                            android:layout_marginLeft="15dp"
                            android:layout_marginTop="2dp"
                            android:text="@string/licence_plate_counter" >
                        </TextView>
                        <EditText
                            android:id="@+id/et_1"
                            style="@style/WhiteFormField"
                            android:layout_below="@+id/tv_1"
                            android:hint="@string/licence_plate_hint" >
                        </EditText>
                    </RelativeLayout>
                    <!-- This textview is not visible if relative layout is gone -->
                    <TextView
                        android:id="@+id/tv_liab_ra_04_flipper_mandat"
                        style="@style/WhiteFormTextHint"
                        android:layout_below="@+id/vg_liab_ra_04_flipper_car_container_visible"
                        android:layout_marginBottom="15dp"
                        android:text="@string/mandatory_field" >
                    </TextView>
                </RelativeLayout>
            </ScrollView>
        </merge>

    Java code:

        private void hideCar() {
            if (!accident.getParticipant(0)) {
                rlCarContainer.setVisibility(View.GONE);
            } else {
                rlCarContainer.setVisibility(View.VISIBLE);
            }
        }
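    One possible workaround sketch (not a confirmed fix): re-anchor the mandatory-field TextView explicitly whenever the container is hidden, so it always has a live anchor. The field names tvMandatory and the boolean reading of accident.getParticipant(0) are assumptions; the ids come from the XML above.

        private void hideCar() {
            boolean showCar = accident.getParticipant(0);   // assuming this really is a boolean flag
            rlCarContainer.setVisibility(showCar ? View.VISIBLE : View.GONE);

            // Re-anchor the TextView that sits below the container.
            RelativeLayout.LayoutParams lp =
                    (RelativeLayout.LayoutParams) tvMandatory.getLayoutParams();
            lp.addRule(RelativeLayout.BELOW,
                    showCar ? R.id.vg_liab_ra_04_flipper_car_container_visible : R.id.someTextView);
            tvMandatory.setLayoutParams(lp);   // triggers a relayout
        }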

    Read the article

  • Calculating distance from latitude, longitude and height using a geocentric co-ordinate system

    - by Sarge
    I've implemented this method in Javascript and I'm roughly 2.5% out, and I'd like to understand why.

    My input data is an array of points represented as latitude, longitude and height above the WGS84 ellipsoid. These points are taken from data collected from a wrist-mounted GPS device during a marathon race. My algorithm was to convert each point to cartesian geocentric co-ordinates and then compute the Euclidean distance (cf. Pythagoras). Cartesian geocentric is also known as Earth Centred Earth Fixed, i.e. it's an X, Y, Z co-ordinate system which rotates with the earth.

    My test data was the data from a marathon, so the distance should be very close to 42.26 km. However, the distance comes to about 43.4 km. I've tried various approaches and nothing changes the result by more than a metre; e.g. I replaced the height data with data from the NASA SRTM mission, I've set the height to zero, etc. Using Google, I found two points in the literature where lat, lon, height had been transformed, and my transformation algorithm matches them.

    What could explain this? Am I expecting too much from Javascript's double representation? (The X, Y, Z numbers are very big but the differences between two points are very small.)

    My alternative is to compute the geodesic across the WGS84 ellipsoid using Vincenty's algorithm (or similar) and then calculate the Euclidean distance with the two heights, but this seems inaccurate. Thanks in advance for your help!
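    For reference, the standard geodetic-to-ECEF conversion the question describes is, with a the WGS84 semi-major axis (6378137 m), e^2 the first eccentricity squared (about 0.00669438), \varphi latitude, \lambda longitude and h ellipsoidal height:

        N(\varphi) = \frac{a}{\sqrt{1 - e^2 \sin^2\varphi}}
        X = (N(\varphi) + h)\cos\varphi\cos\lambda
        Y = (N(\varphi) + h)\cos\varphi\sin\lambda
        Z = \bigl(N(\varphi)(1 - e^2) + h\bigr)\sin\varphi

    On the double-precision worry: an IEEE double carries roughly 15-16 significant digits, so at ECEF magnitudes of about 6.4e6 m the representable resolution is well under a millimetre; floating-point representation alone should not account for a discrepancy of a few percent.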

    Read the article

  • Total row count for pagination using JPA Criteria API

    - by ThinkFloyd
    I am implementing "Advanced Search" functionality for an entity in my system, such that a user can search that entity using multiple conditions (eq, ne, gt, lt, like etc.) on its attributes. I am using JPA's Criteria API to dynamically generate the criteria query and then using setFirstResult() & setMaxResults() to support pagination. All was fine up to this point, but now I want to show the total number of results on the results grid, and I did not see a straightforward way to get the total count of a criteria query. This is how my code looks:

        CriteriaBuilder builder = em.getCriteriaBuilder();
        CriteriaQuery<Brand> cQuery = builder.createQuery(Brand.class);
        Root<Brand> from = cQuery.from(Brand.class);
        CriteriaQuery<Brand> select = cQuery.select(from);
        .
        . // Created many predicates and added to Predicate[] pArray
        .
        select.where(pArray);
        // Added orderBy clause
        TypedQuery typedQuery = em.createQuery(select);
        typedQuery.setFirstResult(startIndex);
        typedQuery.setMaxResults(pageSize);
        List resultList = typedQuery.getResultList();

    My result set could be big, so I don't want to load my entities just for the count query. Is there an efficient way to get the total count, like the rowCount() method on Criteria (I think it's there in Hibernate's Criteria)?
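    A common pattern is to run a second criteria query that selects count() over the same root and filters. A minimal sketch; buildPredicates is a hypothetical helper, because JPA predicates are tied to the root they were built from and generally have to be re-created against the count query's own root:

        CriteriaBuilder builder = em.getCriteriaBuilder();

        CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
        Root<Brand> countRoot = countQuery.from(Brand.class);
        countQuery.select(builder.count(countRoot));
        countQuery.where(buildPredicates(builder, countRoot)); // same filters, rebuilt for this root
        // no setFirstResult/setMaxResults here: we want the total over the whole filtered set
        Long total = em.createQuery(countQuery).getSingleResult();

    The total then drives both the grid's "N results" label and the page count, while the original query keeps its setFirstResult/setMaxResults pagination.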

    Read the article

  • Spring aop multiple pointcuts & advice but only the last one is working

    - by Jarle Hansen
    I have created two Spring AOP pointcuts that are completely separate and will be woven in for different parts of the system. The pointcuts are used in two different around advices; these around advices will point to the same Java method. How the xml file looks:

        <aop:config>
            <aop:pointcut expression="execution(......)" id="pointcutOne" />
            <aop:pointcut expression="execution(.....)" id="pointcurTwo" />
            <aop:aspect id="..." ref="springBean">
                <aop:around pointcut-ref="pointcutOne" method="commonMethod" />
                <aop:aroung pointcut-ref="pointcutTwo" method="commonMethod" />
            </aop:aspect>
        </aop:config>

    The problem is that only the last pointcut works (if I change the order, "pointcutOne" works because it is last). I have gotten it to work by creating one big pointcut, but I would like to have them separate. Any suggestions as to why only one of the pointcuts works at a time?

    Read the article

  • Handling selection and deselection of objects in a user interface

    - by Dennis
    I'm writing a small audio application (in Silverlight, but that's not really relevant I suppose), and I'm struggling with a problem I've faced before and never solved properly. This time I want to get it right.

    In the application there is one Arrangement control, which contains several Track controls, and every Track can contain AudioObject controls (these are all custom user controls). The user needs to be able to select audio objects, and when these objects are selected they are rendered differently. I can make this happen by hooking into the MouseDown event of the AudioObject control and setting state accordingly. So far so good, but when an audio object is selected, all other audio objects need to be deselected (well, unless the user holds the shift key). Audio objects don't know about other audio objects though, so they have no way to tell the rest to deselect themselves.

    Now if I approached this problem like I did last time, I would pass a reference to the Arrangement control in the constructor of the AudioObject control and give the Arrangement control a DeselectAll() method or something like that, which would tell all Track controls to deselect all their AudioObject controls. This feels wrong, and if I apply this strategy to similar issues I'm afraid I will soon end up with every object having a reference to every other object, creating one big tightly coupled mess. It feels like opening the floodgates for poorly designed code. Is there a better way to handle this?
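    One common way out of the "everyone references everyone" trap is a small shared selection service: each AudioObject reports its clicks to the service, and the service notifies whatever was previously selected that it should repaint. A language-agnostic sketch of that shape (written in Java; all names are made up):

        import java.util.LinkedHashSet;
        import java.util.Set;
        import java.util.function.BiConsumer;

        // Hypothetical selection service: items talk to this instead of to each other.
        class SelectionService<T> {
            private final Set<T> selected = new LinkedHashSet<>();
            private final BiConsumer<T, Boolean> onSelectionChanged; // (item, isSelected) -> repaint

            SelectionService(BiConsumer<T, Boolean> onSelectionChanged) {
                this.onSelectionChanged = onSelectionChanged;
            }

            // Called from an item's mouse-down handler; extend == shift held.
            void select(T item, boolean extend) {
                if (!extend) {
                    for (T old : selected) onSelectionChanged.accept(old, false); // redraw as deselected
                    selected.clear();
                }
                if (selected.add(item)) onSelectionChanged.accept(item, true);    // redraw as selected
            }

            Set<T> getSelected() { return Set.copyOf(selected); }
        }

    Each AudioObject only needs a reference to this one service (injected by whatever creates it), so the Arrangement and Track controls never have to know about selection at all.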

    Read the article

  • Modularizing web applications

    - by Matt
    Hey all, I was wondering how big companies tend to modularize components on their pages. Facebook is a good example: there's a team working on Search that has its own CSS, javascript, html, etc.; there's a team working on the news feed that has its own CSS, javascript, html, etc.; and the list goes on. They cannot all be aware of what everyone else is naming their div tags and whatnot, so what is the controller(?) doing to hook all these components into the final page?

    Note: This doesn't just apply to Facebook - any company that has separate teams working on separate components has some logic that helps them out.

    EDIT: Thanks all for the responses; unfortunately I still haven't really found what I'm looking for. When you check out the source code (granted it's minified), the divs have UIDs. My guess is that there is a compilation process that runs through and makes each of the components unique, renaming divs and CSS rules. Any ideas?

    EDIT 2: Thanks all for contributing your thoughts - the bounty went to the highest-upvoted answer. The question was designed to be vague; I think it led to a really interesting discussion. As I improve my build process, I will contribute my own thoughts and experiences. Thanks all! Matt Mueller

    Read the article

  • Improve Log Exceptions

    - by Jaider
    I am planning to use log4net in a new web project. In my experience, I have seen how big the log table can get, and I also notice that errors or exceptions are repeated. For instance, I just queried a log table that has more than 132,000 records, and using DISTINCT I found that only about 2,500 records are unique (~2%); the others (~98%) are just duplicates. So I came up with this idea to improve logging.

    Have a couple of new columns, counter and updated_dt, that are updated every time someone tries to insert the same record. If you want to track the user that caused the exception, you need to create a user_log or log_user table to map the N-N relationship.

    Creating this model may make the system slow and inefficient, trying to compare all this long text... Here is the trick: we should also have a hash column, binary(16) or binary(32), that hashes the message and the exception, and configure an index on it. We can use HASHBYTES to help us. I am not an expert in DB, but I think that would be the fastest way to locate a similar record. And because hashing doesn't guarantee uniqueness, it will help to locate similar records much faster, and then we can compare by message or exception directly to make sure they really are duplicates.

    This is a theoretical/practical solution, but will it work or just bring more complexity? What aspects am I leaving out, or what other considerations do I need to have? A trigger would do the job of insert-or-update, but is a trigger the best way to do it?
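    A minimal sketch of the hashing idea done in application code instead of with SQL Server's HASHBYTES (either works; this just computes the dedup key before the insert). The method name and the assumption that the message and exception text are available as strings are illustrative:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        final class LogDedup {
            // 16-byte MD5 digest of message + exception, suitable for an indexed binary(16) column.
            static byte[] dedupKey(String message, String exceptionText) {
                try {
                    MessageDigest md5 = MessageDigest.getInstance("MD5");
                    md5.update(message.getBytes(StandardCharsets.UTF_8));
                    md5.update((byte) 0);   // separator so "a"+"bc" hashes differently from "ab"+"c"
                    md5.update(exceptionText.getBytes(StandardCharsets.UTF_8));
                    return md5.digest();
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException("MD5 not available", e);
                }
            }
        }

    The write path then becomes "insert if the hash is new, otherwise increment counter and set updated_dt", which can live either in an upsert-style statement or in the trigger the question mentions.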

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches.

    Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence                      <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message                          <- SEQUENTIAL

    The first approach which I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.

    Pros:
    - predictable ordering of messages (i.e. we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message

    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache since message objects need to be copied for each handler to use
    - large overhead for small handlers

    The pseudocode of this approach would be as follows:

        for each message in message_sequence                      <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message                          <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.

    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater

    Cons:
    - unpredictable ordering: if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)

    The pseudocode is as follows:

        parallel_for each message in message_sequence             <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message                          <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
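    The question targets Intel TBB in C++; purely to illustrate the shape of the second approach (parallel over messages, handlers applied in order per message), here is a rough sketch in Java with hypothetical types:

        import java.util.List;
        import java.util.Map;
        import java.util.function.Consumer;

        final class ParallelDispatch {
            // handlerTable: message type -> handlers registered for that type (hypothetical layout)
            static <M> void dispatch(List<M> messages, Map<Class<?>, List<Consumer<M>>> handlerTable) {
                messages.parallelStream().forEach(message -> {            // PARALLEL over messages
                    for (Consumer<M> handler
                            : handlerTable.getOrDefault(message.getClass(), List.of())) {
                        handler.accept(message);                          // SEQUENTIAL per message
                    }
                });
                // As noted in the question: ordering across messages is non-deterministic,
                // and handlers must be thread-safe with respect to any shared state.
            }
        }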

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse: certain small regions of relatively high density, and relatively few isolated nonempty cells. The amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that). This needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:
    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
    - Set individual cells or blit small regions (the player is making a move)
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
    - Find some regions with approximately a given density (player spawning location)
    - Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching)
    - Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process).

    Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
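    For the core storage, a minimal in-memory sparse-grid sketch (names and the cell payload type are placeholders): pack the two 32-bit coordinates into one long key and keep only non-empty cells in a hash map, so small rectangular queries iterate the rectangle rather than the whole world.

        import java.util.HashMap;
        import java.util.Map;

        final class SparseGrid<C> {
            private final Map<Long, C> cells = new HashMap<>();

            private static long key(int x, int y) {
                return ((long) x << 32) | (y & 0xFFFFFFFFL);   // pack (x, y) into one long
            }

            C get(int x, int y) { return cells.get(key(x, y)); }

            void set(int x, int y, C cell) {
                if (cell == null) cells.remove(key(x, y)); else cells.put(key(x, y), cell);
            }

            // Small rectangular neighbourhood query: iterate the rectangle, not the whole map.
            Map<Long, C> region(int x0, int y0, int w, int h) {
                Map<Long, C> out = new HashMap<>();
                for (int x = x0; x < x0 + w; x++)
                    for (int y = y0; y < y0 + h; y++) {
                        C c = cells.get(key(x, y));
                        if (c != null) out.put(key(x, y), c);
                    }
                return out;
            }
        }

    One way to satisfy the persistence constraint is to map the same shape onto a relational table (x, y, payload) with a composite index on (x, y), and to serve density/outline queries from a coarser summary table keyed by (x / blockSize, y / blockSize); both keep the "no persistent process" restriction intact.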

    Read the article

  • How many custom tabs can I add to a Facebook page?

    - by Maxi Ferreira
    I'm building a web application to create custom tabs and add them to a user's Facebook fan pages. I know how the process of "installing" FB apps into FB pages works so that they show up as Page Tabs, but the problem is the client wants to allow the user to create unlimited Page Tabs for a single FB Page. So I basically have two questions.

    1 - Can I reuse a single FB App so it is included in the same page several times? If so, is there a way to know the "id" of that Page Tab? If I have my FB App look for the tab content at http://www.mywebapp.com/tab/, I know I get a signed_request with the App ID and the Page ID, but if that same App is installed several times on the same Page, I don't know which Tab the user has clicked on. I know it's a little bit messy, and I don't think there's a way to do this. So my next question is probably more relevant.

    2 - Is there a limit on how many Tabs I can add to a single Facebook page? This way, if there's a limit of, say, 12 Tabs, I can create 12 FB Tab Apps, store the IDs, and then I know which Tab of which Page the user is currently viewing.

    Thanks in advance! Maxi

    Read the article

  • Should I write more SQL to be more efficient, or less SQL to be less buggy?

    - by RenderIn
    I've been writing a lot of one-off SQL queries that return exactly what a certain page needs and no more. Alternatively, I could reuse existing queries and issue a number of SQL requests linear in the number of records on the page. As an example, I have a query to return People and a query to return Job Details for a person. To return a list of people with their job details I could query once for people and then once for each person to retrieve their job details. I've found that in most cases that solution returns things in a reasonable amount of time, but I don't know how well it will scale in my environment. Instead I've been writing queries to join people + job details, or people + salary history, etc. I'm looking at my models and I see how I could shave off maybe 30% of my code if I were to reuse existing queries. This is a big temptation. Is it a bad thing to go for reuse over efficiency in general, or does it all come down to the specific situation? Should I first do it the easy way and then optimize later, or is it best to get the code knocked out while everything is fresh in my mind? Thoughts, experiences?

    Read the article

  • Git merge 2 new files with removed content and added content

    - by Loïc Faure-Lacroix
    So we are working with 2 different repositories and both designers modified the same file. The problem is quite simple, but I have no idea how to solve it yet. Both files are marked as new since they have almost nothing in common except that file. When I try to merge from branch A to B, it marks the parts added in A as deleted in B, and on the other side, what was added in B appears deleted in A. Git seems to try to outsmart me, when I know that I need almost every change and nothing should be marked as a deletion.

    I have 2 other branches that should merge without problems after these 2. I can't merge them yet since there are some recent changes that may not merge really well either. I have to merge A and B = E, then C and D = F, and then hopefully E and F.

    So the big question here is: how can I do a completely manual merge that will mark every change as a conflict? Anything deleted and anything added should be marked as a conflict that I can resolve by myself using an editor. Git is trying to outsmart me and failing terribly at it.

    Read the article

  • injection attack (I thought I was protected!) <?php /**/eval(base64_decode( everywhere

    - by Cyprus106
    I've got a fully custom PHP site with a lot of database calls, and I just got injection hacked. This little chunk of code showed up in dozens of my PHP pages:

        <?php /**/ eval(base64_decode(big string of code....

    I've been pretty careful about my SQL calls and such; they're all in this format:

        $query = sprintf("UPDATE Sales SET `Shipped`='1', `Tracking_Number`='%s' WHERE ID='%s' LIMIT 1 ;",
            mysql_real_escape_string($trackNo),
            mysql_real_escape_string($id));
        $result = mysql_query($query);
        mysql_close();

    For the record, I rarely use mysql_close() at the end; that just happened to be the code I grabbed. I can't think of any places where I don't use mysql_real_escape_string() (although I'm sure there are probably a couple; I'll be grepping soon to find out). There are also no places where users can put in custom HTML or anything. In fact, most of the user-accessible pages, if they use SQL calls at all, are almost inevitably "SELECT * FROM" pages that use a GET or POST, depending.

    Obviously I need to beef up my security, but I've never had an attack like this and I'm not positive what I should do. I've decided to put limits on all my inputs and go through looking to see if I missed a mysql_real_escape_string somewhere... Does anybody else have any suggestions? Also, what does this type of code do? Why is it there?

    Read the article

  • Building a custom CMS, how to handle page settings?

    - by davidosomething
    Not a ColdFusion-specific question, so answer however you can. I've inherited a ColdFusion project where, at the top of every page, various page-setting-specific variables are set, such as:

        <cfset request.page.title = "Example Page">
        <cfset request.page.machineTitle = "example_page">
        <cfset request.page.isJQueryEnabled = 1>
        <cfset request.page.showNavigation = 1>
        <cfset request.page.SWFObjectVersion = 2.2>

    I'm thinking about creating a database table with just:

        integer  page_id
        varchar  key
        varchar  value

    I'd reduce the variables at the top of every page to just the page id, and then call the DB for the correct settings. Is this a good idea? I hate reinventing the wheel, but this is a really big project that would require many months for a full content migration to a CMS. What are the current practices for storing page settings? (e.g., what does WordPress do? Drupal? etc.)

    Read the article

  • Non-blocking TCP connection issues.

    - by Poni
    Hi! I think I'm in trouble. I have two TCP apps connected to each other which use winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst: the sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example:

        void some_function()
        {
            char cBuff[1024];

            // filling cBuff with some data
            WSASend(...);   // sending cBuff, non-blocking mode

            // filling cBuff with other data
            WSASend(...);   // again, sending cBuff

            // ..... and so forth!
        }

    If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct?

    Now, what strategies can I implement in order to maintain a big sack of such buffers, and how should I handle them? And if I am to use such buffers, that means I should copy the data to be sent from the source buffer to a temporary one, so I'd set SO_SNDBUF on each socket to zero so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
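    The question is WinSock/IOCP in C++, but the "sack of buffers" idea itself is easy to sketch; here is the shape in Java with made-up names. The rule it encodes: acquire a buffer per outstanding send, copy the payload into it, leave it untouched until the completion notification, then release it back to the pool.

        import java.nio.ByteBuffer;
        import java.util.concurrent.ConcurrentLinkedQueue;

        final class SendBufferPool {
            private final ConcurrentLinkedQueue<ByteBuffer> free = new ConcurrentLinkedQueue<>();
            private final int bufferSize;

            SendBufferPool(int count, int bufferSize) {
                this.bufferSize = bufferSize;
                for (int i = 0; i < count; i++) free.add(ByteBuffer.allocateDirect(bufferSize));
            }

            // Take a buffer for one outstanding send; grow the pool if it runs dry.
            ByteBuffer acquire() {
                ByteBuffer b = free.poll();
                return (b != null) ? b : ByteBuffer.allocateDirect(bufferSize);
            }

            // Call only after the "write complete" notification for that buffer.
            void release(ByteBuffer b) {
                b.clear();
                free.add(b);
            }
        }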

    Read the article

  • Am I making the right choice in choosing Yii as my PHP Framework?

    - by Bara
    I am about to begin development of a new website and have been doing research on PHP frameworks. I'm not an advanced PHP developer, but I have been developing web sites and apps (in asp.net) for a few years now. My website will primarily be AJAX-based (using jQuery) and making lots of calls to web services. After some research, here's what I came up with:

    - CakePHP: Originally started developing in this, but found it too complex. The fact that it forces you to use and learn all this new stuff just to use it was a bit daunting, so I put it aside for the time being.
    - Zend: The performance of the framework leaves me a bit skeptical, but I heard it has great support for creating web services. I also heard it was a bit complex.
    - CodeIgniter: No real reason for not using this one. Based on what I've read, CodeIgniter and Yii are very similar, but Yii is a bit faster and doesn't have un-needed code for PHP4 (since I plan on developing exclusively in PHP5).

    As for Yii, the only things that scare me about it are that it is newer than the other frameworks, so it has a smaller community. It also doesn't seem to have a ton of web service support (only SOAP, from my understanding), as opposed to Zend. So my questions come down to:

    - Should these things worry me? (not as big a community, poor web service support)
    - Is there anything else I should look into?
    - Is my choice of Yii over the other frameworks OK for a primarily AJAX-based web app?

    Bara

    Read the article

  • Pseudo-quicksort time complexity

    - by Ord
    I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

        quicksort :: Ord a => [a] -> [a]
        quicksort []     = []
        quicksort (p:xs) = quicksort [y | y <- xs, y < p] ++ [p] ++ quicksort [y | y <- xs, y >= p]

    Okay, so I know this thing has problems. The biggest problem is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort because it has to do two passes of the list when it partitions it, and it does costly append operations to splice it back together afterwards. Further, the choice of the first element as the pivot is not the best choice.

    But even considering all of that, isn't the average time complexity of this quicksort the same as the standard quicksort? Namely, O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.
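    For the intuition: the two partition passes and the ++ splices are all O(n) work per call, so they only change the constant on the linear term; the average-case recurrence (a sketch, assuming a uniformly random pivot rank) is the same shape as for in-place quicksort:

        T(n) = c\,n + \frac{1}{n}\sum_{k=0}^{n-1}\bigl(T(k) + T(n-1-k)\bigr)
             = c\,n + \frac{2}{n}\sum_{k=0}^{n-1} T(k) \;\in\; O(n \log n)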

    Read the article

  • Delphi: Error when starting MCI

    - by marco92w
    I use the TMediaPlayer component for playing music. It works fine with most of my tracks, but it doesn't work with some of them. When I want to play them, an error message is shown (a screenshot in German, which roughly means):

        In the project pMusicPlayer.exe an exception of the class EMCIDeviceError occurred.
        Message: "Error when starting MCI.". Process was stopped.
        Continue with "Single Command/Statement" or "Start".

    The program quits directly after calling the procedure "Play" of TMediaPlayer. This error occurred with the following file, for example:

        file size: 7.40 MB
        duration:  4:02 minutes
        bitrate:   256 kBit/s

    I re-encoded this file with a bitrate of 128 kBit/s, and thus a file size of 3.70 MB, and then it works fine! What's wrong with the first file? Windows Media Player and other programs can play it without any problems. Is it possible that Delphi's TMediaPlayer cannot handle big files (e.g. 5 MB) or files with a high bitrate (e.g. 128 kBit/s)? What can I do to solve the problem?

    Read the article

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now.

    Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub.

    What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? Cron job? Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)

    Read the article

  • How much effort do you have to put in to get gains from using SSE?

    - by John
    Case One. Say you have a little class:

        class Point3D
        {
        private:
            float x, y, z;
        public:
            operator+=() ...etc
        };

        Point3D &Point3D::operator+=(Point3D &other)
        {
            this->x += other.x;
            this->y += other.y;
            this->z += other.z;
        }

    A naive use of SSE would simply replace these function bodies with a few intrinsics. But would we expect this to make much difference? MMX used to involve costly state changes IIRC; does SSE, or are its instructions just like other instructions? And even if there's no direct "use SSE" overhead, would moving the values into SSE registers and back out again really make it any faster?

    Case Two. Instead, you're working with a less OO-based code base. Rather than an array/vector of Point3D objects, you simply have a big array of floats:

        float coordinateData[NUM_POINTS * 3];

        void add(int i, int j)   // yes it's unsafe, no overlap check... example only
        {
            for (int x = 0; x < 3; ++x)
            {
                coordinateData[i*3 + x] += coordinateData[j*3 + x];
            }
        }

    What about the use of SSE here? Any better?

    In conclusion: is trying to optimise single vector operations using SSE actually worthwhile, or is it really only valuable when doing bulk operations?

    Read the article

  • SWT: scrollable area within a tab

    - by DaveJohnston
    I am trying to add a scrollable area to my tabbed window. So far I have a CTabFolder in a shell. I have added 5 CTabItems to it and everything works as expected. On one of my CTabItems the contents are too big to fit on the screen, so I would like to be able to scroll. The contents are a collection of Groups, each containing various widgets.

    The CTabFolder is created as follows:

        CTabFolder tabs = new CTabFolder(shell, SWT.BORDER);
        tabs.setSimple(false);
        tabs.setUnselectedImageVisible(false);
        tabs.setUnselectedCloseVisible(false);
        tabs.setMinimizeVisible(false);
        tabs.setMaximizeVisible(false);

        FormData tabsLayoutData = new FormData();
        tabsLayoutData.top = new FormAttachment(0, 5);
        tabsLayoutData.left = new FormAttachment(0, 5);
        tabsLayoutData.bottom = new FormAttachment(92, 0);
        tabsLayoutData.right = new FormAttachment(100, -5);
        tabs.setLayoutData(tabsLayoutData);

    Then the CTabItem:

        CTabItem tab = new CTabItem(tabs, SWT.NONE);
        tab.setText("Role");

    Then the contents:

        Composite tabArea = new Composite(tabs, SWT.V_SCROLL);
        tabArea.setLayout(new FormLayout());
        tab.setControl(tabArea);

    The Groups contained within the tab are created with tabArea as the parent, and everything appears as you would expect. The problem is that the vertical scroll bar is always present but doesn't seem to do anything; the contents are chopped off at the bottom of the tabArea composite. Is there anything else I need to do to get the scrolling to work properly?
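    A plain Composite created with SWT.V_SCROLL gets a scroll bar but does not scroll its children by itself; the usual SWT idiom is to put the content inside a ScrolledComposite (org.eclipse.swt.custom.ScrolledComposite). A sketch against the code above, keeping the same parents and only re-parenting the content composite:

        ScrolledComposite scroller = new ScrolledComposite(tabs, SWT.V_SCROLL);
        scroller.setExpandHorizontal(true);
        scroller.setExpandVertical(true);

        Composite tabArea = new Composite(scroller, SWT.NONE);
        tabArea.setLayout(new FormLayout());
        // ... create the Groups with tabArea as the parent, exactly as before ...

        scroller.setContent(tabArea);
        // Tell the ScrolledComposite how tall the content really is so it knows when to scroll.
        scroller.setMinSize(tabArea.computeSize(SWT.DEFAULT, SWT.DEFAULT));

        tab.setControl(scroller);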

    Read the article

  • Cache data in SQL CE database

    - by user93422
    Background: I have an SQL CE database that is constantly updated (every second). I have a (web) application that allows a user to look at the data in real time. At some point the user can click a "take a snapshot" button, which opens the snapshot in a different window. On that form there are "print" and "download" buttons that will either generate a page for printing or stream the data as a CSV file, but the same data snapshot has to be used; i.e. I can't go back to the DB to get the latest data for that.

    Details:
    - The SQL CE database is exposed through a WCF web service.
    - A snapshot consists of up to 500 records, 10 columns each.
    - An expiration time of 2 hours on the snapshot is sufficient.
    - It is a low-traffic application, so I don't expect more than a few (5) connections at the same time.
    - Losing a snapshot is not a big deal; the user can simply generate a new one.
    - The database is accessed by a self-hosted WCF web service using Linq-to-SQL.
    - The web site is ASP.NET MVC hosted on UltiDev Cassini.
    - The database and web site will most likely be on the same box when deployed.
    - The entire app is intranet-bound.

    Problem: I need to cache the snapshot of the data at the moment the user pressed the "take a snapshot" button, so that I can use the same data to generate the print page or generate a file for download.

    Solution 1: Each time there is a need to generate a snapshot, I will create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself.

    Solution 2: Cache the snapshot in memory on either the DB server or the web server.

    Question: Is there anything wrong with the proposed solutions? Any different solution suggestions?

    Read the article

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to some tips on structuring the cluster so that it is possible to submit jobs from remote machines.

    Currently I have 5 machines. 4 of them are basically the Hadoop cluster with the NameNodes, YARN, etc. One machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking on the setup, and if anyone can chime in on the points I am not clear about, that would be great.

    I was thinking about the best setup for a small cluster, so I decided to expose only the manager machine and probably use that to submit all the jobs. The other machines will see each other etc., but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it; if anyone could point me in the right direction, that would be great.

    Another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on that machine in order to use the normal hadoop commands and to write/submit jobs, say from Eclipse or something similar?

    So to sum it up, my questions are:
    - Is this an OK setup for a small test cluster?
    - How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
    - How do I set up a client machine to submit jobs to a remote cluster, with an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup?

    Thanks, I would greatly appreciate any advice or help on this.

    Read the article

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (during peak hours; around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with fopen(file, 'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable.

    The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize the data and insert it into another MySQL table.

    How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if there are multiple intuitive ways of going at it that you smart people can think of... please let me know :)

    Read the article
