Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • Why is Chrome receiving data?

    - by Aero
    Chrome seems to be continually receiving data even though I'm not downloading anything. This is making a noticeable impact on my browsing speed. The first screenshot shows Chrome receiving data even though I'm not downloading anything (nor buffering a YouTube video, etc.). Even after I completely close Google Chrome, "chrome.exe" remains in the Resource Monitor list and the "Received bytes" column continually increases in the screenshot below. However, "chrome.exe" does not show up in the Processes tab of Task Manager. This only occurs sometimes, but I don't know why. I have tried running malware/virus scans to ensure that there is nothing malicious behind this, but those scans have shown nothing. Any ideas on what's causing this?

    Read the article

  • IO redirect engine with metadata

    - by hawk.hsieh
    Is there any C library or tool that can redirect IO and be configured by metadata, and that lets a dynamic-link library perform custom processing before feeding data into the next IO stage? For example, a network video recorder: record video: socket -> do_something() -> file; preview video: socket -> do_something() -> PCI device. An HTTP service: download file: socket -> do_something(http) -> file; post file: socket -> do_something(http) -> file. Serial control: monitor device: uart -> do_something(custom protocol) -> popen("zip") -> socket. I know that Unix-like OSes have IO redirection and can integrate any application you want; even for socket IO you can use /dev/tcp or implement a process that redirects to stdout. But that is process-based: the per-process footprint is big and IPC is heavy. Therefore, I am looking for something that redirects IO within a single process, where the data flow between IOs is configurable with metadata (XML, JSON or others).
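
    Not a C library, but the shape of what is being described can be sketched in a few lines of Python purely for illustration: stage functions registered inside one process and chained according to a JSON description. The stage names and file paths below are made up; a real C implementation would presumably load the stages from shared libraries instead of a dict.

        import json

        # Hypothetical stage registry; a C implementation would dlopen() plugin libraries instead.
        STAGES = {
            "read_file":  lambda data, cfg: open(cfg["path"], "rb").read(),
            "upper":      lambda data, cfg: data.upper(),
            "write_file": lambda data, cfg: (open(cfg["path"], "wb").write(data), data)[1],
        }

        def run_pipeline(metadata):
            """Run every stage named in the metadata, in order, inside a single process."""
            data = b""
            for step in json.loads(metadata)["pipeline"]:
                data = STAGES[step["stage"]](data, step)
            return data

        meta = ('{"pipeline": [{"stage": "read_file", "path": "in.bin"}, '
                '{"stage": "upper"}, {"stage": "write_file", "path": "out.bin"}]}')
        # run_pipeline(meta)  # "in.bin" and "out.bin" are placeholder paths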

    Read the article

  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes, and # of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-Months it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, [s3cmd] doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l but that seems like a hack. The Ruby AWS::S3 library looked promising, but only provides the # of bucket items, not the total bucket size. Is anyone aware of any other command line tools or libraries (prefer Perl, PHP, Python, or Ruby) which provide ways of getting this data?
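
    If a Python library is acceptable, one way to get both numbers is to iterate over the bucket listing yourself, for example with the newer boto3 package. Note this still lists every object, so it scales no better than s3cmd du; it just avoids shelling out. The bucket name below is a placeholder.

        import boto3  # assumes AWS credentials are already configured in the environment

        s3 = boto3.resource("s3")
        bucket = s3.Bucket("bucket_name")  # placeholder bucket name

        total_bytes = 0
        total_items = 0
        for obj in bucket.objects.all():   # paginates through every key in the bucket
            total_bytes += obj.size
            total_items += 1

        print(f"{total_items} objects, {total_bytes} bytes")

    For very large buckets it may also be worth checking whether the BucketSizeBytes / NumberOfObjects storage metrics that S3 publishes to CloudWatch cover the use case, since those avoid listing the bucket entirely.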

    Read the article

  • What's the best way to apply a drop shadow?

    - by jckeyes
    What is the best method for applying drop shadows? I'm working on a site right now where we have a good deal of them; however, I've been fighting to find the best way to do it. The site is pretty animation-heavy, so the shadows need to work well with that. I tried a jQuery shadow plugin. The shadows looked good and were easy to use, but they were slow and didn't work well with any animations (lots of redrawing was required, and it was very jerky). I also tried creating my own jQuery extension that wraps my element in a couple of gray divs and then offsets them a little bit to give a shadow effect. This worked well: it's quick and responsive during the animations. However, it makes DOM manipulation/traversal cumbersome, since everything is wrapped in these shadow divs. I know there has to be a better way, but this isn't exactly my forte. Thoughts?

    Read the article

  • Does it make a difference in performance if I use self.fooBar instead of fooBar?

    - by mystify
    Note: I know exactly what a property is. This question is about performance. Using self.fooBar for READ access seems like a waste of time to me: unnecessary Objective-C messaging is going on. The getters typically just return the ivar, so as long as it's fairly certain that no nontrivial getter method will ever be written, I think it's perfectly fine to bypass this heavyweight mechanism. Objective-C messaging is about 20 times slower than a direct call. So if there is some high-performance, high-frequency code with hundreds of properties in use, maybe it does help a lot to avoid unnecessary Objective-C messaging? Or am I wasting my time thinking about this?

    Read the article

  • Drupal: so many JS and CSS files?

    - by Patrick
    Hi, I've realized I'm loading a lot of resources (24 CSS and 17 JS files) with Drupal. I have several modules installed, and they all come with a CSS and a JS file. For my website I'm only using 1 additional JS plugin (all the other 16 come with Drupal modules). I haven't installed useless modules; they are all necessary, and they require JS such as swfobject, ajax_views, jquery.media, spamspan, lightbox (modal, video and default JS files), etc. Same thing with CSS files: ckeditor, filefield, lightbox, tagadelic, uploadfield, fieldgroup, views, taxonomy_super_select, html-element, tabs, messages... etc. For my own theme I only use its CSS, zen.css, of course. So, is this normal? Or should I remove all this stuff? Are Drupal websites normally this heavy? Thanks

    Read the article

  • Leverage cloud and programming to share GB's of photos

    - by jcmoney
    My friends and I went on a trip and we have over 8 GB of photos we want to share. We live in different geographic locations, and all of us (14 people) have a part of the 8 GB. I was wondering if there's a way to leverage my PHP skills to share all these photos. My current plan is to make a simple site where people can upload a bunch of files, and that lists those files for others to download (probably as a compressed folder of selected ones), but I was wondering if there's a better way or if I'm grossly underestimating the scalability issues. All of us have high-speed internet (essentially T1), and I was planning on using Amazon EC2, since this is a heavy task but only for a short time period. That's also the reason I can't use Dropbox or similar services, since they have a 2 GB cap (and I don't want to have everyone sign up and install something). I also don't want to set up anything too tricky, since not all of them are tech savvy.

    Read the article

  • What to beware of reading old Numarray tutorials and examples?

    - by DarenW
    Python currently uses NumPy for heavy-duty math and image processing. The earlier Numeric and Numarray are obsolete, but even today there are many tutorials, notes, code samples and other documentation using them. Some of these cover special topics of interest, some are well written but haven't been updated or replaced, and some are otherwise still useful. Quite a bit is the same between Numeric, Numarray and NumPy, so I usually get good mileage out of these older docs. Occasionally, though, I run into a line of code that results in an error. It doesn't happen often enough for me to remember the workaround, but usually I figure it out at the cost of some time. What are the main things to watch out for when relying on such older documentation for current NumPy use? Is there a list of the differences and how to translate between them?
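
    A rough sketch of the most common translations follows; the NumPy lines run as written, while the old-API idioms in the comments are from memory and may not match every Numeric/Numarray release exactly.

        import numpy as np

        a = np.arange(10, dtype=np.float64)   # Numeric docs often use arrayrange(10, typecode=Float64)
        print(a.dtype, a.dtype.char)          # old code calls a.typecode() (Numeric) or a.type() (numarray)
        b = a.astype(np.int32)                # older examples pass single-character typecodes such as 'i'
        m = a.reshape(2, 5)                   # older tutorials tend to use module-level reshape(a, (2, 5))
        print(m.mean(axis=0))                 # axis keywords carry over, but check defaults when porting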

    Read the article

  • Debug unstable Apache server under Debian

    - by almo
    Since yesterday, the Apache server that runs on my Debian machine has been very unstable. Sometimes my websites load and sometimes they don't. I think it has to do with memory, since my Apache log is full of "Out of memory (allocated 262144) (tried to allocate 4480 bytes)". I also attached a screenshot of the memory graph. A server restart resolves the problem temporarily. I looked at the processes that are using memory, but the biggest one is MySQL with 6.5%. Where else can I look for the problem? Edit: I did a free -m right after rebooting and another one about 2 hours later. I think the trend is visible: root@xxx:~# free -m total used free shared buffers cached Mem: 4016 731 3284 0 80 200 -/+ buffers/cache: 449 3566 Swap: 459 0 459 root@xxx:~# free -m total used free shared buffers cached Mem: 4016 2466 1550 0 92 473 -/+ buffers/cache: 1900 2115 Swap: 459 0 459

    Read the article

  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing - sorry about that, it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember this is a tile-based game, so this texture is so massive only because it contains many tiles, which are each 32 pixels by 32 pixels. I think the experts will definitely know what a tile-based game is like. The orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be incredibly more efficient than the other. Choice/Method #1: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { SpriteBatch.Draw( MyLargeTexture, // One large 256 x 4096 texture new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(x, y, 32, 32), // Notice the source rectangle 'cuts out' 32 by 32 squares from the texture corresponding to the loop Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the first method is referencing one large texture many many times, each time using a small rectangle of this large texture to draw the appropriate tile image. Choice/Method #2: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { Texture2D tileTexture = map.GetTileTexture(x, y); // Getting a small 32 by 32 texture (different each iteration of the loop) SpriteBatch.Draw( tileTexture, new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(0, 0, tileTexture.Width, tileTexture.Height), // Notice the source rectangle uses the entire texture, because the entire texture IS 32 by 32 Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the second method is drawing many small textures many times. The Question: Which method and why? Personally, I would think it would be incredibly more efficient to use the first method. If you think about what that means for the tile array in a map (think of a large map with 2000 by 2000 tiles, let's say), each Tile object would only have to contain 2 integers, for the X and Y positions of the source rectangle in the one large texture - 8 bytes. If you use method #2, however, each Tile object in the tile array of the map would have to store a 32by32 Texture - an image - which has to allocate memory for the R G B A pixels 32 by 32 times - is that 4096 bytes per tile then? So, which method and why? First priority is speed, then memory-load, then efficiency or whatever you experts believe.

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)
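
    For what it's worth, the asker's own back-of-the-envelope estimate can be reproduced from the figures in the first strace excerpt, treating the lseek offsets as byte positions and assuming a constant, single-pass scan; a small sketch:

        scanned = 48_404_774_912        # highest read offset seen after roughly 48 hours
        disk    = 3 * 10**12            # nominal 3 TB capacity
        hours   = 48

        remaining_days = (disk - scanned) / scanned * hours / 24
        print(f"roughly {remaining_days:.0f} more days at this rate")   # on the order of 120+ days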

    Read the article

  • generating an asp.net web application dll requirement list

    - by Oren Mazor
    I'm trying to set up a web app (32-bit on IIS 7/Win7, the 32-bit setting is enabled, everything is compiled to x86, using VS2008), but there's clearly some DLL loading issue happening. I've been watching procmon and the Fusion logs, but I'm not seeing the name of the missing DLL. I'm a complete newbie to ASP.NET (but have fairly heavy experience on other platforms). I know I can run depends.exe on a binary to see what its dependencies are, but how do I do that for ASP.NET? Specifically, is it possible to get a list of the DLLs that IIS 7 loads for my application?

    Read the article

  • Windows Phone 7 and C++/CLI

    - by Fabio Ceconello
    Microsoft recently released tools and documentation for its new Phone 7 platform, which, to the dismay of those who have a big C++ codebase (like me), doesn't support native development anymore. Although I've found speculation about this decision being reversed, I doubt it will be. So I was thinking about how viable it would be to make this codebase available on Phone 7 by adapting it to compile under C++/CLI. Of course the user-interface parts couldn't be ported, but I'm not sure about the rest. Has anyone had a similar experience? I'm not talking about code that does heavy low-level stuff, but there is quite frequent use of templates and smart pointers.

    Read the article

  • How to change enum definition without impacting clients using it in C#

    - by Rohit
    I have the following enum defined. I have used underscores because this enum is used in logging, and I don't want to incur the overhead of reflection by using a custom attribute; we use very heavy logging. Now the requirement is to change "LoginFailed_InvalidAttempt1" to "LoginFailed Attempt1". If I change this enum, I will have to change its value across the application. I can replace an underscore with a space inside the logging SP. Is there any way I can make this change without affecting the whole application? Please suggest. public enum ActionType { None, Created, Modified, Activated, Inactivated, Deleted, Login, Logout, ChangePassword, ResetPassword, InvalidPassword, LoginFailed_LockedAccount, LoginFailed_InActiveAccount, LoginFailed_ExpiredAccount, ForgotPassword, LoginFailed_LockedAccount_InvalidAttempts, LoginFailed_InvalidAttempt1, LoginFailed_InvalidAttempt2, LoginFailed_InvalidAttempt3, ForgotPassword_InvalidAttempt1, ForgotPassword_InvalidAttempt2, ForgotPassword_InvalidAttempt3, SessionTimeOut, ForgotPassword_LockedAccount, LockedAccount, ReLogin, ChangePassword_Due_To_Expiration, ChangePassword_AutoExpired }

    Read the article

  • How to create a formatted localized string?

    - by mystify
    I have a localized string which needs to take a few variables. However, in localization it is important that the order of the variables can change from language to language. So this is not a good idea: NSString *text = NSLocalizedString(@"My birthday is at %@ %@ in %@", nil); In some languages some words come before others, while in other languages it's the reverse (I lack a good example at the moment). How would I provide NAMED variables in a formatted string? Is there any way to do it without some heavy self-made string replacement? Even numbered variables like {%@1}, {%@2}, and so on would be sufficient... is there a solution?

    Read the article

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that represents a server which receives an image in bytes (of size at most 500 kB) over TCP and writes it to a file. It then applies a Sobel filter to the image and sends it over a TCP socket to the client side. I ran it on my laptop and it was very fast. But when I put it on an Amazon EC2 m1.large instance, I found out it is very slow: around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send the result back. I have the following questions: 1- Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2. 2- Even if the code is not that efficient, the server is ultimately handling a very low load (just one client); does the "inefficient" code justify such performance? 3- My laptop is only dual core... Why would the Amazon EC2 server perform worse than my laptop? How is this explained? Excuse me for my ignorance.

    Read the article

  • Including the functionality of a tool within another program?

    - by darren
    Hi there. I would like to write an application, out of my own interest, that graphically visualizes some network concepts. Basically, I would like to show the output from tools like ping, traceroute and nmap. The most obvious approach seems to be to use pipes to call out to these tools from my C program and process the information they return. However, I would like to avoid this heavy-handed approach if possible. My question is: is it possible to somehow link against these tools, or are there APIs that can be used to gain programmatic access instead? If so, is this only available on a tool-by-tool basis? One reason for wanting to do this is to keep everything in a single process / address space and to avoid dependence on these external tools. For example, if I wrote an iPhone application, I would not be able to spawn processes to call out to the external tools themselves. Thanks for any advice or suggestions.

    Read the article

  • How do you make Python wait so that you can read the output?

    - by anonnoir
    I've always been a heavy user of Notepad2, as it is fast, feature-rich, and supports syntax highlighting. Recently I've been using it for Python. My problem: when I finish editing some Python source code and try to launch it, the console window disappears before I can read the output. Is there any way for me to make the results wait so that I can read them, short of using an input() or time-delay function? Otherwise I'd have to use IDLE, because its output stays there for you to read. (My apologies if this question is a silly one, but I'm very new to Python and to programming in general.)
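
    One common workaround is simply to launch the script from an already-open command prompt (for example, cmd /k python script.py keeps the window around). If a pause in the code is acceptable after all, a minimal sketch that also keeps the window open when the script crashes, so the traceback stays visible:

        import traceback

        def main():
            print("program output goes here")    # placeholder for the real script

        if __name__ == "__main__":
            try:
                main()
            except Exception:
                traceback.print_exc()             # print the error instead of letting the window vanish
            finally:
                input("Press Enter to close...")  # keeps the console open until the output has been read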

    Read the article

  • MySQL with Java: Open connection only if possible

    - by emempe
    I'm running a database-heavy Java application on a cluster, using Connector/J 5.1.14. Therefore, I have up to 150 concurrent tasks accessing the same MySQL database. I get the following error: Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections This happens because the server can't handle so many connections. I can't change anything on the database server. So my question is: Can I check if a connection is possible BEFORE I actually connect to the database? Something like this (pseudo code): check database for open connection slots if (slot is free) { Connection cn = DriverManager.getConnection(url, username, password); } else { wait ... } Cheers

    Read the article

  • need for tcp fine-tuning on heavily used proxy server

    - by Vijay Gharge
    Hi all, I am using Squid as an Internet proxy server on RHEL 4 update 6 & 8, with quite a heavy load, i.e. 8k established connections during peak hours. Without depending too much on the application provider's expertise, I want to get the maximum out of Linux. With respect to that, I have the following questions: How do I find out whether there is scope for further TCP fine-tuning (without exhausting the available resources), given that the benchmark values supplied by the vendor look poor? Is there any parameter available from the OS / network stack whose value will show me this? If there is scope, how should I identify and configure the OS TCP stack parameters, i.e. using sysctl or any specific parameter? After tuning, how should I clearly measure the performance enhancement / degradation?

    Read the article

  • Issue mapping a MAC address to an IPv6 address

    - by deepsky
    I know that IPv6 addresses with prefixes in the range 001 to 111 should use a 64-bit interface identifier that follows the EUI-64 format, which translates the MAC to IPv6 as below. MAC: 00-02-b3-1e-83-29 --> 02-02-b3-ff-fe-1e-83-29 --> IPv6 address: fe80::202:b3ff:fe1e:8329. Then I checked my network status with ipconfig /all on Windows XP, but it seems my IPv6 address doesn't follow the above rule: MAC: 00-24-81-XX-XX-XX, IPv6 address: 2001:da8:8006:225:0:24:81XX:XXXX. Obviously it doesn't follow the EUI-64 format; instead it just uses the MAC directly as the last 8 bytes. Does anyone know the reason? Please correct me if I am wrong.
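
    A small, self-contained sketch of the modified EUI-64 conversion described above (flip the universal/local bit of the first byte and splice ff:fe into the middle), which reproduces the example translation:

        def mac_to_link_local(mac):
            """Derive the fe80::/64 link-local address from a MAC via modified EUI-64."""
            octets = [int(x, 16) for x in mac.split("-")]
            octets[0] ^= 0x02                                # flip the universal/local bit
            eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe between OUI and NIC part
            groups = ["%x" % ((eui64[i] << 8) | eui64[i + 1]) for i in range(0, 8, 2)]
            return "fe80::" + ":".join(groups)

        print(mac_to_link_local("00-02-b3-1e-83-29"))  # fe80::202:b3ff:fe1e:8329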

    Read the article

  • Creating a pie chart for an app

    - by jhodgson4
    I'm developing an app which requires a pie chart to display a set number of modules. The modules need to be clickable, sending a value to the database for how many times each module has been clicked. The slices will change color etc. depending on this database value. The slices will always be equal in size. All simple stuff. My question is: what charting system would you use? I've been looking at Google Charts, but I have no way of registering a value in a slice without changing its 'weight' in the chart. So ideally I would like to add data-stage="2" to each slice, which I could access with a custom method. Also, Google Charts seems quite heavy for what I need. Any advice would be greatly appreciated. Joe

    Read the article

  • How to program three editions Light, Pro, Ultimate in one solution

    - by Henry99
    I'd like to know the best way to program three different editions of my C# ASP.NET 3.5 application in VS2008 Professional (which includes a web deployment project). I have a Light, a Pro and an Ultimate edition (or version) of my application. At the moment I've put all of them in one solution with three build configurations in Configuration Manager, and I use preprocessor directives all over the code (there are around 20 such constructs in some ten thousand lines of code, so it's manageable): #if light //light code #endif #if pro //pro code #endif //etc... I've read Stack Overflow for hours, expecting to find out how e.g. Microsoft handles this with its different Windows editions, but did not find what I expected. Somewhere there is a heated discussion about whether preprocessor directives are evil. What I like about those #if directives is the side-by-side code for the differences, so I will still understand the code for the different editions after six months, plus the added benefit of NOT shipping the other editions' compiled code to the customer. OK, long explanation; the question again: What's the best way to go?

    Read the article

  • How can I clear the appcache on the Google Chrome iPad app?

    - by Jannis
    I've written a little HTML5-based web app that I am trying to debug on the iPad using the Chrome for iPad app. I have added a cache.manifest file to my app, which caches most static resources heavily; however, since I now want to debug the app, I need a way to clear this cache. I know that on Chrome for Mac you can use chrome://appcache-internals/, but this page does not exist in the iPad app of Chrome. The regular "Clear Browsing Data" does not empty the appcache, at least not in my case. Does anyone know how I can clear the appcache for the Chrome iPad app?

    Read the article

  • Sending a large number of mails causing problems on CentOS 6 / Plesk 10

    - by papakost
    I have a VPS running CentOS 6. When the system tries to send the daily newsletter, after some time (e.g. after sending about 2000 emails) I get the error "Unable to send mail" and the system memory goes really high. Up to that moment, the mails are delivered normally. The other symptoms are: I cannot see anything in /var/log/maillog (the file doesn't seem to be written to). All files in /var/spool/mail have a size of 0 bytes. From time to time the httpd log shows errors like: /usr/sbin/sendmail: error while loading shared libraries: libc.so.6: cannot open shared object file: Error 23. The "Activate mail service on domain" setting in Plesk is deactivated. Any idea what's going wrong here?

    Read the article
