Search Results

Search found 32994 results on 1320 pages for 'second level cache'.


  • Avoiding 'Buffer Overrun' C6386 warning

    - by bdhar
    In my code, I am using an array xyz of 10 objects. When I try to access an element of the array using an unsigned int index, like xyz[level], I get a 'Buffer overrun' (C6386) warning. Logically, I am pretty sure that level is always less than 10. How can I avoid this warning?
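    One common way to satisfy the analyzer is to make the bound explicit so it can prove the access is in range. A minimal sketch, assuming the names from the question (Obj and doWork() are placeholders):

        const unsigned int kCount = 10;
        Obj xyz[kCount];

        // An explicit guard hands the analyzer (and future readers) the invariant:
        if (level < kCount) {
            xyz[level].doWork();  // no C6386 here: level is provably < 10
        }

        // If the guard is genuinely redundant, /analyze builds can be told so instead:
        // __analysis_assume(level < kCount);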


  • Am I using too much memory? (Rails on EC2 with Resque)

    - by Stpn
    I am looking at the memory usage of a Rails application that runs background processes via Resque. Since the common answer to "how many workers is too many?" is "test and see", I ran some memory commands and wonder if someone can help figure out whether memory usage is already too high, or whether I can still add some extra workers. This is all under maximum load:

        $ free -t -m
                     total       used       free     shared    buffers     cached
        Mem:          1756       1532        223          0         12        229
        -/+ buffers/cache:       1291        464
        Swap:          895         10        885
        Total:        2652       1543       1108

        $ vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0  10588 156172  13400 326476    1    6     4     0    5    4  1  0 99  0

    If there is any extra info I can provide to help answer this, I would be happy to do so. If the question is strange in some way, please let me know and I'd be glad to fix it.
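    Since Resque forks one OS process per worker, a rough sizing check is the average worker footprint against the headroom on the -/+ buffers/cache line (464 MB free above). A sketch, assuming the workers show up with "resque" in their command line:

        # Average resident size per Resque worker, in MB:
        ps -eo rss,args | grep -i [r]esque | awk '{sum += $1; n++} END {if (n) printf "%.0f MB avg across %d workers\n", sum/n/1024, n}'

    Dividing the remaining 464 MB by that per-worker average gives a ballpark for how many more workers fit before the box starts swapping.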


  • Which nonclustered index would be better to create on SQL Server?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records; the table has a primary key.

        SELECT CustomerName FROM Customers

    The execution plan shows me: I/O cost = 3.45646, operator cost = 4.57715. As a first attempt to improve performance, I created a nonclustered index for this table:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
            IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this first try, the execution plan for the SELECT shows: I/O cost = 2.79942, operator cost = 3.92001. For the second try, I deleted this nonclustered index and created a new one:

        CREATE NONCLUSTERED INDEX [IX_CategoryName] ON [dbo].[Categories]
        (
            [CategoryId] ASC
        ) INCLUDE ([CategoryName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
            IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this second try, the execution plan shows the same result: I/O cost = 2.79942, operator cost = 3.92001. Am I doing something wrong, or is this expected? Should I use the first nonclustered index with two key fields, or the second one keyed on a single field (CategoryId) that INCLUDEs the second field (CategoryName)?
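    Both indexes end up with identical costs because each one covers this query: SELECT CustomerName with no WHERE clause is a full scan, and the optimizer simply scans the narrowest index that contains CustomerName. A sketch of what that implies (the index name is my own):

        -- For this exact query, a single-column index already covers it:
        CREATE NONCLUSTERED INDEX [IX_Customers_CustomerName]
        ON [dbo].[Customers] ([CustomerName] ASC);
        GO
        -- Key column vs. INCLUDE only starts to matter once a predicate or ORDER BY
        -- needs a column in key order, e.g.:
        --   SELECT CustomerName FROM Customers WHERE CustomerId = @id;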


  • Lots of files being used by a blank web page. What are they?

    - by byronyasgur
    I am trying to optimise a website, and I was using the network waterfall facility in Google Chrome. When I looked at the results, there were lots of files which I didn't recognise. I first thought they might be something to do with Google Chrome itself, so I put a blank HTML file on my desktop and checked, but there was nothing in the waterfall except the file itself. So I put a blank file on my server, and I got the output below. What are all these files? Are they all necessary? Is this normal, and do I need to be concerned in any way? My host is shared hosting on a LAMP server using cPanel, and the provider has always been excellent in every regard I'm aware of. I also note that a couple of those files show problems, but I have no idea how to fault-find that or whether it's a concern.

    EDIT: I have cleared the cache, so I don't think it's a browser-cache issue.


  • m2eclipse sets JDK compliance to 1.4

    - by jihedamine
    Using Eclipse 3.5, when I create a new Maven project, m2eclipse automatically adds J2SE-1.4 to the libraries and sets the Compiler Compliance Level to 1.4 (Project properties > Java Compiler). My JRE system library is 1.6 and my default compiler compliance level is 1.6; I don't even have 1.4 installed. Can I make m2eclipse use my default settings and prevent it from modifying the project settings?
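    m2eclipse derives the Eclipse project settings from the POM, and the maven-compiler-plugin defaults to an old language level when none is specified, so the usual fix is to pin the level in pom.xml rather than in Eclipse. A sketch:

        <!-- pom.xml: tell maven-compiler-plugin (and therefore m2eclipse) to use 1.6 -->
        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-compiler-plugin</artifactId>
              <configuration>
                <source>1.6</source>
                <target>1.6</target>
              </configuration>
            </plugin>
          </plugins>
        </build>

    After editing, running Maven > Update Project Configuration on the project re-syncs the compliance level and the JRE library entry.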


  • Problem processing files from a Content Distribution Network

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed a few websites where, when a page is requested from their server, they grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their server, and then send them to the user requesting the page. This doesn't make much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset-management concept?


  • How to select a number of lines from large text files?

    - by MiNdFrEaK
    I was wondering how to select a number of lines from a certain text file. As an example, I have a text file containing the following lines:

        branch 27 : rect id 23400 rect: -115.475609 -115.474907 31.393650 31.411301
        branch 28 : rect id 23398 rect: -115.474907 -115.472282 31.411301 31.417351
        branch 29 : rect id 23396 rect: -115.472282 -115.468033 31.417351 31.427151
        branch 30 : rect id 23394 rect: -115.468033 -115.458733 31.427151 31.438181
        Non-Leaf Node: level=1 count=31 address=53
        branch 0 : rect id 42 rect: -115.768539 -106.251556 31.425039 31.717550
        branch 1 : rect id 50 rect: -109.559479 -106.009361 31.296721 31.775299
        branch 2 : rect id 51 rect: -110.937401 -106.226143 31.285870 31.771971
        branch 3 : rect id 54 rect: -109.584412 -106.069092 31.285240 31.775230
        branch 4 : rect id 56 rect: -109.570961 -106.000954 31.296721 31.780769
        branch 5 : rect id 58 rect: -115.806213 -106.366188 31.400450 31.687519
        branch 6 : rect id 59 rect: -113.173859 -106.244057 31.297440 31.627750
        branch 7 : rect id 60 rect: -115.811478 -106.278252 31.400450 31.679470
        branch 8 : rect id 61 rect: -109.953888 -106.020111 31.325319 31.775270
        branch 9 : rect id 64 rect: -113.070969 -106.015968 31.331841 31.704750
        branch 10 : rect id 68 rect: -113.065689 -107.034576 31.326300 31.770809
        branch 11 : rect id 71 rect: -112.333344 -106.059860 31.284081 31.662920
        branch 12 : rect id 73 rect: -115.071083 -106.309677 31.267879 31.466850
        branch 13 : rect id 74 rect: -116.094414 -106.286308 31.236290 31.424770
        branch 14 : rect id 75 rect: -115.423264 -106.286308 31.229691 31.415510
        branch 15 : rect id 76 rect: -116.111656 -106.313110 31.259390 31.478300
        branch 16 : rect id 77 rect: -116.247467 -106.309677 31.240231 31.451799
        branch 17 : rect id 78 rect: -116.170792 -106.094543 31.156429 31.391781
        branch 18 : rect id 79 rect: -116.225723 -106.292709 31.239960 31.442850
        branch 19 : rect id 80 rect: -116.268013 -105.769913 31.157240 31.378111
        branch 20 : rect id 82 rect: -116.215424 -105.827202 31.198441 31.383421
        branch 21 : rect id 83 rect: -116.095734 -105.826439 31.197460 31.373819
        branch 22 : rect id 84 rect: -115.423264 -105.815018 31.182640 31.368891
        branch 23 : rect id 85 rect: -116.221527 -105.776512 31.160931 31.389830
        branch 24 : rect id 86 rect: -116.203369 -106.473831 31.168350 31.367611
        branch 25 : rect id 87 rect: -115.727631 -106.501587 31.189100 31.395941
        branch 26 : rect id 88 rect: -116.237289 -105.790756 31.164780 31.358959
        branch 27 : rect id 89 rect: -115.791344 -105.990044 31.072620 31.349529
        branch 28 : rect id 90 rect: -115.736847 -106.495079 31.187969 31.376900
        branch 29 : rect id 91 rect: -115.721710 -106.000130 31.160351 31.354601
        branch 30 : rect id 92 rect: -115.792236 -106.000793 31.166620 31.378811
        Leaf Node: level=0 count=21 address=42
        branch 0 : rect id 18312 rect: -106.412270 -106.401367 31.704750 31.717550
        branch 1 : rect id 18288 rect: -106.278252 -106.253387 31.520321 31.548361

    I just want the lines that fall between each "Non-Leaf Node: level=1" header and the following "Leaf Node: level=0" header. There are a lot of segments like this, and I need them all.
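    A hedged sketch of one way to do it (the file name is made up): let awk toggle printing between the two headers, which repeats automatically for every segment.

        # Print every line from each "Non-Leaf Node: level=1" header up to, but not
        # including, the next "Leaf Node: level=0" header:
        awk '/^Non-Leaf Node: level=1/ {p = 1} /^Leaf Node: level=0/ {p = 0} p' tree_dump.txt

    If the "Non-Leaf Node" header line itself should be excluded as well, change the first action to '{p = 1; next}'.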


  • How can I tell if the screen is on in Android?

    - by user297020
    In Android 2.1 (API level 7), the method PowerManager.isScreenOn() returns a boolean that is true if the screen is turned on and false if it is turned off. I am developing code for Android 1.5 (API level 3). How do I accomplish the same task in older versions of Android? I do not want to turn the screen on or off in my code; I just want to know what state it is in.
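    On API levels before 7, a common workaround is to track the state yourself via the ACTION_SCREEN_ON/ACTION_SCREEN_OFF broadcasts (both exist since API level 1). A minimal sketch; note that it only learns the state once the first change arrives after registration, so the initial value is an assumption:

        private boolean screenOn = true; // assumed: screen is on when we register

        private final BroadcastReceiver screenReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                if (Intent.ACTION_SCREEN_OFF.equals(intent.getAction())) {
                    screenOn = false;
                } else if (Intent.ACTION_SCREEN_ON.equals(intent.getAction())) {
                    screenOn = true;
                }
            }
        };

        // e.g. in onCreate(); these two actions must be registered in code,
        // not in the manifest:
        IntentFilter filter = new IntentFilter(Intent.ACTION_SCREEN_ON);
        filter.addAction(Intent.ACTION_SCREEN_OFF);
        registerReceiver(screenReceiver, filter);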


  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into real trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get.

    Now for some performance details. During the normal workflow of a single project run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, reads are extremely high, as it needs to deduplicate records identified in the first half of processing. This is the workflow "keep your working set in RAM" was made for, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user-derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will very likely incur disk hits.

    A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old; it runs infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K.

    The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that 1:10 RAM-GB to HD-GB is a good rule of thumb for slow disks. As we are seriously considering much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.
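    A quick back-of-envelope from the numbers above: at the 1:10 RAM-GB to HD-GB rule, a 1 TB dataset implies roughly 100 GB of RAM across the replica set, and an 8K median document puts the set on the order of 125 million documents. A user-facing query touching a group of 10 documents therefore reads only about 80K, so the SSD sizing question is less about bandwidth and more about how many of those 3-second-budget queries arrive as cache misses that must be absorbed as random reads.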


  • Can you stop a deferred callback in jQuery 1.5?

    - by chobo2
    Hi, I am wondering about something. Say you have the following:

        // Assign handlers immediately after making the request,
        // and remember the jqxhr object for this request
        var jqxhr = $.ajax({ url: "example.php" })
            .success(function(response) { alert("success"); });

        // perform other work here ...

        // Set another success function for the request above
        jqxhr.success(function(response) {
            alert("second success");
        });

    So I am thinking this: I have a general function that I want to use on all my responses, passed in as the first success handler. It checks whether server-side validation found any errors; if it did, it formats them and displays a message. Now I am wondering if I could somehow have the second success function do request-specific work. Say one AJAX request needs to add a row to a table: I just do what I have above, and in the second success handler I add the row. Is it possible, though, that if the first success handler runs and sees validation errors from the server, it can stop the second success handler from running? Sort of:

        if (first success finds errors) {
            // print out errors
            // don't continue on to the next success
        } else {
            // go to the next success
        }

    Edit: I found that there is something called deferred.reject() and this does stop it, but I am wondering how I can specify to stop only the success callbacks. My thinking is: if there are other deferred callbacks attached, like complete ones, will they be rejected too?
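    One shape that works in 1.5 (helper names are hypothetical): success/done callbacks fire in the order they were attached, so the usual approach is a flag that the first, shared handler sets and the request-specific handler checks.

        var valid = true;
        var jqxhr = $.ajax({ url: "example.php" })
            .success(function (response) {
                if (hasServerErrors(response)) {   // shared validation check
                    valid = false;
                    showErrors(response);          // format and display the message
                }
            });

        // The request-specific handler bails out early instead of being "stopped":
        jqxhr.success(function (response) {
            if (!valid) return;
            addRowToTable(response);
        });

    This also bears on the edit: rejecting a deferred flips the whole thing to the failed state, so fail/error and reject-side callbacks are affected together; there is no built-in way to cancel only some of the success callbacks once the deferred has resolved.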


  • Android: SQLite vs flat files

    - by mixm
    Hi. I'm creating a game right now and I'm a bit stuck on how to implement storage of levels. I need to be able to download level files from the internet over the air (OTA). I'm not so familiar with transferring files OTA, but I have some experience with databases (MySQL). What would be the better way of storing the game's level data?


  • Pointing class property to another class with vectors

    - by jmclem
    I've got a simple class, and another class with a property that points to the first class:

        #include <iostream>
        #include <vector>
        using namespace std;

        class first {
        public:
            int var1;
        };

        class second {
        public:
            first* classvar;
        };

    Then I've got a function that's supposed to point classvar at the intended element of a vector of first:

        void fill(vector<second>& sec, vector<first>& fir) {
            sec[0].classvar = &fir[0];
        }

    Finally, main(): create and fill a vector of class first, create the second vector, and run the fill function.

        int main() {
            vector<first> a(1);
            a[0].var1 = 1000;
            vector<second> b(1);
            fill(b, a);
            cout << b[0].classvar.var1 << '\n';
            system("PAUSE");
            return 0;
        }

    This gives me the following error:

        1>c:\...\main.cpp(29) : error C2228: left of '.var1' must have class/struct/union
        1>        type is 'first *'

    I can't figure out why it reads classvar as the whole vector instead of just a single instance. If I do this instead:

        cout << b[0].classvar[0].var1 << '\n';

    it reads perfectly. Can anyone figure out the problem? Thanks in advance.
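    The error message itself is the clue: classvar is declared as first* (a pointer), so member access needs -> rather than . - e.g.:

        cout << b[0].classvar->var1 << '\n';  // dereference the pointer member

    b[0].classvar[0].var1 works for the same reason: indexing a pointer with [0] also dereferences it; it is not indexing the vector.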


  • I got an error when implementing TDE in SQL Server 2008

    - by mahima
    While running:

        USE mssqltips_tde;
        CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TDECert
        GO

    I get this error:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'KEY'.
        Msg 319, Level 15, State 1, Line 3
        Incorrect syntax near the keyword 'with'. If this statement is a common table
        expression or an xmlnamespaces clause, the previous statement must be
        terminated with a semicolon.

    Please help me resolve this, as I need to implement encryption on my database.
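    CREATE DATABASE ENCRYPTION KEY only parses on a SQL Server 2008 or later instance (and TDE additionally requires Enterprise or Developer edition), so that pair of syntax errors usually means the statement is reaching an older server. A hedged sketch of the checks plus the full sequence, assuming the certificate TDECert already exists in master:

        SELECT @@VERSION;  -- confirm the instance is actually SQL Server 2008+
        GO
        USE mssqltips_tde;
        GO
        CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TDECert;
        GO
        -- The key alone does nothing; TDE starts when encryption is switched on:
        ALTER DATABASE mssqltips_tde SET ENCRYPTION ON;
        GO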


  • Debug unstable Apache server under Debian

    - by almo
    Since yesterday, the Apache server that runs on my Debian machine has been very unstable: sometimes my websites load and sometimes they don't. I think it has to do with memory, since my Apache log is full of:

        Out of memory (allocated 262144) (tried to allocate 4480 bytes)

    I also attached a screenshot of the memory graph. A server restart resolves the problem temporarily. I looked at the processes that are using memory, but the biggest one is MySQL at 6.5%. Where else can I look for the problem?

    Edit: I did a free -m right after rebooting and another about 2 hours later. I think the trend is visible:

        root@xxx:~# free -m
                     total       used       free     shared    buffers     cached
        Mem:          4016        731       3284          0         80        200
        -/+ buffers/cache:        449       3566
        Swap:          459          0        459

        root@xxx:~# free -m
                     total       used       free     shared    buffers     cached
        Mem:          4016       2466       1550          0         92        473
        -/+ buffers/cache:       1900       2115
        Swap:          459          0        459
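    A useful first step is to watch which processes grow between the two snapshots; a single %MEM figure hides Apache's cost because it is spread across many children. A sketch (the process name on Debian may be apache2 or httpd):

        # Top memory consumers right now:
        ps aux --sort=-rss | head -n 15

        # Average resident size per Apache child, in MB:
        ps -C apache2 -o rss= | awk '{sum += $1; n++} END {if (n) printf "%.0f MB avg over %d children\n", sum/n/1024, n}'

    Multiplying that per-child average by MaxClients bounds Apache's worst case; if the product exceeds the RAM left over after MySQL, lowering MaxClients usually stops this class of crash.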


  • How to play multiple simultaneous audio sources in Silverlight

    - by Shurup
    I want to play multiple simultaneous audio sources in Silverlight. I've created a prototype in Silverlight 4 that should play two mp3 files containing the same tick sound at a one-second interval. These files must sound as one if they are played together with any whole-second offsets (0 and 1, 0 and 2, 1 and 1 seconds, etc.). In my prototype I use two MediaElement objects (me and me2).

        DateTime startTime;

        private void Play_Clicked(object sender, RoutedEventArgs e)
        {
            me.SetSource(new FileStream(file1, FileMode.Open));
            me2.SetSource(new FileStream(file2, FileMode.Open));
            var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(1) };
            timer.Tick += RefreshData;
            timer.Start();
        }

    The first file should be played at 00:00 sec. and the second at 00:02 sec.

        void RefreshData(object sender, EventArgs e)
        {
            if (me.CurrentState != MediaElementState.Playing)
            {
                startTime = DateTime.Now;
                me.Play();
                return;
            }
            var elapsed = DateTime.Now - startTime;
            if (me2.CurrentState != MediaElementState.Playing && elapsed >= TimeSpan.FromSeconds(2))
            {
                me2.Play();
                ((DispatcherTimer)sender).Stop();
            }
        }

    The tracks play differently every time, and not simultaneously as they should (as one sound).
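    A sketch of one adjustment, using the same names as above: schedule the second element from the first element's own playback clock (me.Position) instead of DateTime.Now, since DispatcherTimer ticks are far coarser than 1 ms and wall-clock time ignores the first element's startup latency. This narrows, though it cannot fully remove, the drift:

        void RefreshData(object sender, EventArgs e)
        {
            if (me.CurrentState != MediaElementState.Playing)
            {
                me.Play();
                return;
            }
            if (me2.CurrentState != MediaElementState.Playing
                && me.Position >= TimeSpan.FromSeconds(2))
            {
                me2.Play(); // me2 also needs time to spin up, so some skew remains
                ((DispatcherTimer)sender).Stop();
            }
        }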


  • free -m output: should I be concerned about this server's low memory?

    - by Michael
    This is the output of free -m on a production database machine (MySQL). 83 MB free looks pretty bad, but I assume the buffers/cache will be used before swap?

        [admin@db1 www]$ free -m
                     total       used       free     shared    buffers     cached
        Mem:         16053      15970         83          0        122       5343
        -/+ buffers/cache:      10504       5549
        Swap:         2047          0       2047

    top output sorted by memory:

        top - 10:51:35 up 140 days, 7:58, 1 user, load average: 2.01, 1.47, 1.23
        Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
        Cpu(s): 6.5%us, 1.2%sy, 0.0%ni, 60.2%id, 31.5%wa, 0.2%hi, 0.5%si, 0.0%st
        Mem: 16439060k total, 16353940k used, 85120k free, 122056k buffers
        Swap: 2096472k total, 104k used, 2096368k free, 5461160k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
        20757 mysql  15   0 10.2g 9.7g 5440 S 29.0 61.6  28588:24 mysqld
        16610 root   15   0  184m  18m 4340 S  0.0  0.1   0:32.89 sysshepd
         9394 root   15   0  154m 8336 4244 S  0.0  0.1   0:12.20 snmpd
        17481 ntp    15   0 23416 5044 3916 S  0.0  0.0   0:02.32 ntpd
         2000 root    5 -10 12652 4464 3184 S  0.0  0.0   0:00.00 iscsid
         8768 root   15   0 90164 3376 2644 S  0.0  0.0   0:00.01 sshd
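    The line to read is -/+ buffers/cache: 5549 MB is still available, because the 5343 MB of page cache is reclaimed on demand. The box is not down to 83 MB in any practical sense, and swap is essentially untouched. A one-liner that prints the meaningful number on this older free format:

        free -m | awk '/buffers\/cache/ {print $4 " MB available to applications"}'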


  • Making two windows using CreateWindowEx()

    - by Jamie Keeling
    Hello, I have a Windows form that has a simple menu and performs a simple operation. I want to be able to create another window, with all the functionality of a menu bar, message pump, etc., as a separate thread, so I can share the results of the operation with the second window. I.e.:

    1) Form A opens; Form B opens as a separate thread
    2) Form A performs the operation
    3) Form A passes the results via memory to Form B
    4) Form B displays the results

    I'm confused about how to go about it. The main app runs fine, but I'm not sure how to add a second window if the first one already exists. I think that CreateWindow will let me make another window, but again I'm not sure how to access the message pump so I can respond to events like WM_CREATE on the second window. I hope this makes sense. Thanks!

    Edit: I've attempted to make a second window, and although this compiles, no window at all shows on launch:

        //////////////////////
        // WINDOWS FUNCTION //
        //////////////////////
        LRESULT CALLBACK WindowFunc(HWND hMainWindow, UINT message, WPARAM wParam, LPARAM lParam)
        {
            // Fields
            WCHAR buffer[256];
            struct DiceData storage;
            HWND hwnd;

            // Act on current message
            switch (message)
            {
            case WM_CREATE:
                AddMenus(hMainWindow);
                hwnd = CreateWindowEx(
                    0, "ChildWClass", (LPCTSTR) NULL,
                    WS_CHILD | WS_BORDER | WS_VISIBLE,
                    0, 0, 0, 0,
                    hMainWindow, NULL, NULL, NULL);
                ShowWindow(hwnd, SW_SHOW);
                break;

    Any suggestions as to why this happens?
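    One likely reason nothing appears: "ChildWClass" is never registered, so CreateWindowEx returns NULL, and even if it succeeded, a WS_CHILD window with a 0x0 size is invisible. A second top-level window also does not need its own thread; both windows can share the thread's one message loop. A minimal sketch (class and procedure names are my own):

        case WM_CREATE:
        {
            AddMenus(hMainWindow);

            WNDCLASSEX wc = {0};
            wc.cbSize        = sizeof(wc);
            wc.lpfnWndProc   = SecondWindowFunc;   // a second window procedure you define
            wc.hInstance     = GetModuleHandle(NULL);
            wc.lpszClassName = "SecondWClass";
            wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
            wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
            RegisterClassEx(&wc);

            HWND hwndSecond = CreateWindowEx(
                0, "SecondWClass", "Results",
                WS_OVERLAPPEDWINDOW,                     // top-level, not WS_CHILD
                CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,  // a real, non-zero size
                NULL, NULL, GetModuleHandle(NULL), NULL);
            ShowWindow(hwndSecond, SW_SHOW);
            break;
        }

    Form A can then hand results to Form B with a message of your own, e.g. SendMessage(hwndSecond, WM_APP, 0, (LPARAM)&results), handled in SecondWindowFunc.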


  • Validating parameters according to a fixed reference

    - by James P.
    The following method is for setting the transfer type of an FTP connection. Basically, I'd like to validate the character input (see comments). Is this going overboard? Is there a more elegant approach? How do you approach parameter validation in general? Any comments are welcome.

        public void setTransferType(Character typeCharacter, Character optionalSecondCharacter)
                throws NumberFormatException, IOException {
            // http://www.nsftools.com/tips/RawFTP.htm#TYPE
            // Syntax: TYPE type-character [second-type-character]
            //
            // Sets the type of file to be transferred. type-character can be any of:
            //
            //   * A - ASCII text
            //   * E - EBCDIC text
            //   * I - image (binary data)
            //   * L - local format
            //
            // For A and E, the second-type-character specifies how the text should
            // be interpreted. It can be:
            //
            //   * N - Non-print (not destined for printing). This is the default if
            //         second-type-character is omitted.
            //   * T - Telnet format control (<CR>, <FF>, etc.)
            //   * C - ASA Carriage Control
            //
            // For L, the second-type-character specifies the number of bits per
            // byte on the local system, and may not be omitted.
            final Set<Character> acceptedTypeCharacters = new HashSet<Character>(
                    Arrays.asList(new Character[] { 'A', 'E', 'I', 'L' }));
            final Set<Character> acceptedOptionalSecondCharacters = new HashSet<Character>(
                    Arrays.asList(new Character[] { 'N', 'T', 'C' }));

            if (acceptedTypeCharacters.contains(typeCharacter)) {
                if (new Character('A').equals(typeCharacter)
                        || new Character('E').equals(typeCharacter)) {
                    if (acceptedOptionalSecondCharacters.contains(optionalSecondCharacter)) {
                        executeCommand("TYPE " + typeCharacter + " " + optionalSecondCharacter);
                    }
                } else {
                    executeCommand("TYPE " + typeCharacter);
                }
            }
        }
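    One more declarative shape, as a sketch (the enum is my own): encode the legal combinations in the type system so invalid type characters become unrepresentable, and the silent fall-through above (valid type, invalid second character) becomes an explicit error.

        public enum TransferType {
            ASCII('A', true), EBCDIC('E', true), IMAGE('I', false);
            // LOCAL ('L') takes a numeric bits-per-byte argument instead and is
            // left out of this sketch.

            final char code;
            final boolean allowsFormatChar;

            TransferType(char code, boolean allowsFormatChar) {
                this.code = code;
                this.allowsFormatChar = allowsFormatChar;
            }
        }

        public void setTransferType(TransferType type, Character formatChar) throws IOException {
            if (formatChar == null) {
                executeCommand("TYPE " + type.code);
            } else if (type.allowsFormatChar && "NTC".indexOf(formatChar) >= 0) {
                executeCommand("TYPE " + type.code + " " + formatChar);
            } else {
                throw new IllegalArgumentException(
                        "Illegal TYPE arguments: " + type + ", " + formatChar);
            }
        }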


  • Which is the "best" data access framework/approach for C# and .NET?

    - by Frans
    (EDIT: I made it a community wiki as it is more suited to a collaborative format.)

    There are a plethora of ways to access SQL Server and other databases from .NET. All have their pros and cons, and it will never be a simple question of which is "best" - the answer will always be "it depends". However, I am looking for a high-level comparison of the different approaches and frameworks in the context of different levels of systems. For example, I would imagine that for a quick-and-dirty Web 2.0 application the answer would be very different from an in-house enterprise-level CRUD application.

    I am aware that there are numerous questions on Stack Overflow dealing with subsets of this question, but I think it would be useful to try to build a summary comparison. I will endeavour to update the question with corrections and clarifications as we go. So far, this is my understanding at a high level - but I am sure it is wrong. I am primarily focusing on the Microsoft approaches to keep this focused.

    ADO.NET Entity Framework
    - Database agnostic: good because it allows swapping backends in and out
    - Bad because it can hit performance, and database vendors are not too happy about it
    - Seems to be MS's preferred route for the future
    - Complicated to learn (though, see 267357)
    - Accessed through LINQ to Entities, so it provides ORM and thus abstraction in your code

    LINQ to SQL
    - Uncertain future (see "Is LINQ to SQL truly dead?")
    - Easy to learn (?)
    - Only works with MS SQL Server
    - See also "Pros and cons of LINQ"

    "Standard" ADO.NET
    - No ORM
    - No abstraction, so you are back to "roll your own" and playing with dynamically generated SQL
    - Direct access allows potentially better performance

    This ties in to the age-old debate of whether to focus on objects or relational data, to which the answer of course is "it depends on where the bulk of the work is", and since that is an unanswerable question, hopefully we don't have to go into it too much. IMHO, if your application is primarily manipulating large amounts of data, it does not make sense to abstract it too much into objects in the front-end code; you are better off using stored procedures and dynamic SQL to do as much of the work as possible on the back-end. Whereas if you primarily have user interaction which causes database interaction at the level of tens or hundreds of rows, then ORM makes complete sense. So I guess my argument for good old-fashioned ADO.NET would be the case where you manipulate and modify large datasets, in which case you will benefit from direct access to the backend. Another case, of course, is where you have to access a legacy database that is already guarded by stored procedures.

    ASP.NET Data Source Controls
    - Are these something altogether different, or just a layer over standard ADO.NET? Would you really use these if you had a DAL, or if you implemented LINQ or Entities?

    NHibernate
    - Seems to be a very popular and powerful ORM
    - Open source

    Some other relevant links: "NHibernate or LINQ to SQL", "Entity Framework vs LINQ to SQL".


  • ASP Fails with 500 Error

    - by VinceM
    We have a server set up as an IIS box, with some static pages and a few ASP pages that handle the form submissions. The ASP is really VBScript that sends a CDO message. After moving these pages to the new server, the form will not submit; it gives a 500 error, and the following shows in Event Viewer:

        Error: The Template Persistent Cache initialization failed for Application Pool
        'DefaultAppPool' because of the following error: Could not create a Disk Cache
        Sub-directory for the Application Pool. The data may have additional error codes.

    I can't seem to find any info on this anywhere. I was thinking it may have something to do with the fact that we created this server from an image of another server. Thanks for your help in advance. Vince
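    That event usually indicates the directory IIS uses for its ASP template disk cache is missing or not writable by the application pool identity - which fits a server built from another machine's image. A hedged sketch of the repair on IIS 6 (the path and group name vary by IIS version; on systems without icacls, use cacls):

        mkdir "%windir%\system32\inetsrv\ASP Compiled Templates"
        icacls "%windir%\system32\inetsrv\ASP Compiled Templates" /grant IIS_WPG:(OI)(CI)M

    The location itself is configurable via the AspDiskTemplateCacheDirectory metabase property, so also check that it points somewhere that actually exists on the new machine.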


  • Best USB storage for my router, the Asus RT-AC66U?

    - by Jason94
    I have the ASUS RT-AC66U and I want to add USB storage to it. It has two USB ports, and I'm already using one for my printer, so I want to use the last one to attach USB storage. I've read some reviews stating that the throughput of the USB port could be up to 18 MB/s. So in regard to USB storage, should I care about hard disk cache? Simple USB-powered drives seem to have 8 MB of cache, while others (externally powered) have 16 MB, for instance.


  • Has anyone tried the "objectify" library for Google App Engine?

    - by Spines
    I was using JDO for my Google App Engine project but got fed up with the additional 5 seconds it adds to my cold start time. I was planning on just writing stuff directly to the datastore with the low-level datastore API, but then I came across the Objectify project (http://code.google.com/p/objectify-appengine/). Apparently it's a super-light wrapper above the low-level API. Does anyone have experiences with this library that they could share?


  • How can I implement the Null Object design pattern in a generic form?

    - by Colour Blend
    Is there a way to implement the Null Object design pattern in a generic form, so that I don't need to implement it for every business object? To me, there are two high-level classes you'll need for every business class: one for a single record and another for a list. So I think there should be a way to implement the Null Object design pattern at a high level and not have to implement it for every class. Is there a way, and if so, how?
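    If the business objects are (or can be) accessed through interfaces, one generic route in Java is a dynamic proxy that plays "null object" for any of them. A sketch under that assumption:

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;

        public final class NullObjects {
            private NullObjects() {}

            @SuppressWarnings("unchecked")
            public static <T> T nullOf(Class<T> iface) {
                return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                        new Class<?>[] { iface },
                        new InvocationHandler() {
                            public Object invoke(Object proxy, Method m, Object[] args) {
                                Class<?> r = m.getReturnType();
                                // void and all reference types map to null; each
                                // primitive needs its own correctly-boxed zero:
                                if (!r.isPrimitive() || r == void.class) return null;
                                if (r == boolean.class) return Boolean.FALSE;
                                if (r == char.class)    return Character.valueOf('\0');
                                if (r == long.class)    return Long.valueOf(0L);
                                if (r == float.class)   return Float.valueOf(0f);
                                if (r == double.class)  return Double.valueOf(0d);
                                if (r == byte.class)    return Byte.valueOf((byte) 0);
                                if (r == short.class)   return Short.valueOf((short) 0);
                                return Integer.valueOf(0); // int
                            }
                        });
            }
        }

    Usage would be e.g. Customer none = NullObjects.nullOf(Customer.class); every getter returns null/zero/false. For the "list" half, java.util.Collections.emptyList() already is a generic null object.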


  • How to set an EditText in a certain column of a TableLayout?

    - by Nick
    I have a TableLayout in one Android Activity UI. It has two columns. Now I need to add a new row and put an EditText box in the second column of that new row. I also want the EditText to fill the whole cell. I have some code like this:

        TableRow tr = new TableRow(context);
        EditText et = new EditText(context);
        et.setMaxLines(4);
        et.setLayoutParams(new TableRow.LayoutParams(1)); // set it to the second column
        tr.addView(et);
        tl.addView(tr); // tl is the TableLayout

    It puts the EditText in the second column fine, but the EditText is too small. I tried to use:

        et.setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));

    but that seems to override the TableRow.LayoutParams setting. I guess each control can only have one LayoutParams setting. So, how do I make the EditText a 4-line text editor and also make sure it is in the second column of that row? Thanks.
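    A sketch using the question's names: keep the TableRow.LayoutParams (it is what records the column), and make the column itself stretch at the TableLayout level. The second attempt above fails because a plain LayoutParams replaces the TableRow.LayoutParams, and children of a TableRow must carry TableRow.LayoutParams.

        TableRow tr = new TableRow(context);
        EditText et = new EditText(context);
        et.setLines(4); // reserve four lines of height (setMaxLines only caps growth)
        et.setLayoutParams(new TableRow.LayoutParams(1)); // second column

        tr.addView(et);
        tl.addView(tr);
        tl.setColumnStretchable(1, true); // column 1 absorbs the leftover row width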

