Search Results

Search found 10860 results on 435 pages for 'bad blocks'.


  • Are there any negative implications of sourcing a JavaScript file that does not actually exist?

    - by dreftymac
    If you put <script src="/path/to/nonexistent/file.js"></script> in an HTML file and load it in a browser, and no dependency or resource anywhere else in the HTML file expects the file or the code therein to actually exist, is there anything inherently bad-practice about doing this? Yes, it is an odd question. The rationale: the developer is dealing with a CMS that allows custom (self-contained) JavaScript files to be provided in certain circumstances. The problem is that the CMS is not very flexible when it comes to creating conditional includes for JavaScript. Therefore it is easier to just reference the self-contained .js files regardless of whether they are actually at the specified path. Since no errors are displayed to the user, should this practice be considered a viable option?
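
    One way to make the intent explicit, rather than relying on the 404 failing silently, is to guard at the call site. A minimal sketch, where custom.js and customInit are hypothetical names for the optional file and a function it may define:

        <script src="/path/to/maybe/provided/custom.js"></script>
        <script>
          // Only call into the optional file if it actually loaded and
          // defined the expected entry point:
          if (typeof customInit === 'function') { customInit(); }
        </script>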


  • Chipmunk warning still present with --release

    - by Kaliber64
    I'm using Python 2.7 on Windows 7 64-bit. I downloaded the source for Chipmunk 6.2.x and compiled Pymunk with --release and -c mingw32. Almost zero problems (lots of "path not found" errors because I'm bad at this). All prints seem to have disappeared, but I get spammed with EPA iteration warnings. I've seen a couple of discussions but no solutions; possibly one of the Chipmunk betas fixes the floating-point errors that trigger the warning. I picked the latest stable version, I think. My program is seriously bogged down by all the printing. This does not disable the C prints:

        class NullDevice():
            def write(self, s):
                pass

        sys.stdout = NullDevice()

    Any help?
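
    Replacing sys.stdout only intercepts Python-level prints; Chipmunk's warnings are emitted by the C library, which writes straight to the process's OS-level stdout/stderr and never goes through the Python object. A sketch of silencing the file descriptor itself (adjust fd 1 vs. 2 depending on where the warnings actually land):

        import os

        # Redirect the OS-level descriptor to the null device so C code
        # writing to it is silenced too:
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, 2)   # fd 2 = stderr; use 1 for stdout
        os.close(devnull)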


  • How to get more information from a system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas for how to confirm any diagnosis. Some background: the servers are DL160-class machines with hardware RAID across two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 "main" (most CPU-hungry) processes are bound to one core each via CPU affinity; other random background scripts are not pinned anywhere. The filesystem writes ~1.5k blocks/s the whole time (above 2k/s at peak). Normal CPU usage for these servers is ~60% on 7 cores, with some minimal usage on the last one (whatever's running on shells, usually).

    What actually happens is that the "main" services start using 100% CPU at some point, mainly stuck in kernel time. After a couple of seconds, the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel report a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh           D 0000000000000000     0 15911      1
        [118951.273093]  ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183]  ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274]  0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424]  [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510]  [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563]  [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613]  [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692]  [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID/disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some generally weird hardware behaviour. The box also runs MySQL (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200k q/s (selects) and ~10 q/s (writes). The host never runs out of memory or swaps, and no OOM reports are spotted. We've got many boxes with similar/identical hardware and they all seem to behave this way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. The applications themselves don't report anything wrong while they're running, and I can run anything safely on the same hardware in an isolated environment. What can I do to narrow down the problem? Where else should I look for an explanation?


  • Voting UI for showing like/dislike community comments side-by-side

    - by Justin Grant
    We want to add a comments/reviews feature to our website's plugin gallery, so users can vote a particular plugin up or down and leave an optional short comment about what they liked or didn't like about it. I'm looking for inspiration: ideally a good implementation elsewhere on the web that isn't annoying to end users, isn't impossibly complex to develop, and lets users see good and bad comments side-by-side, like this:

        Like: 57 votes                        Dislike: 8 votes
        ---------------------------------     --------------------------------
        "great plugin, saved me hours..."     "hard to install"
        "works well on MacOS and Ubuntu"      "Broken on Windows Vista with
        "integrates well with version 3.2"     UAC enabled"
        More...                               More...

    Anyone know a site which does something like this?


  • How do I design a cryptographic hash function?

    - by Eyal
    After reading the following about why one-way hash functions are one-way, I would like to know how to design a hash function. http://stackoverflow.com/questions/1038307/help-me-better-understand-cryptographic-hash-functions/1047106#1047106 Before everyone gets on my case: Yes, I know that it's a bad idea to not use a proven and tested hash function. I would still like to know how it's done. I'm familiar with Feistel-network ciphers but those are necessarily reversible, horrible for a cryptographic hash. Is there some sort of construction that is well-used in cryptographic hashing? Something that makes it very one-way?
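
    For reference, most classical hash designs (MD4/MD5, SHA-1, SHA-2) follow the Merkle-Damgard construction: pad the message (including its length), then fold a one-way compression function over the blocks, chaining an internal state. The compression function is often built from a block cipher via Davies-Meyer, which is one standard way to get the one-wayness asked about. A deliberately weak toy sketch of that skeleton, in Python:

        import struct

        BLOCK = 16  # bytes per message block (toy size)

        def compress(state, block):
            # Toy mixing step; NOT secure. Real designs use a strong
            # one-way compression function, e.g. Davies-Meyer:
            # E_block(state) XOR state, with E a block cipher keyed
            # by the message block.
            for b in block:
                state = ((state * 0x100000001B3) ^ b) & 0xFFFFFFFFFFFFFFFF
            return state

        def toy_hash(msg):
            # Merkle-Damgard padding: 0x80, zeros, then the 8-byte
            # message length ("length strengthening").
            padded = msg + b'\x80'
            padded += b'\x00' * (-(len(padded) + 8) % BLOCK)
            padded += struct.pack('>Q', len(msg))
            state = 0xCBF29CE484222325  # arbitrary fixed IV
            for i in range(0, len(padded), BLOCK):
                state = compress(state, padded[i:i + BLOCK])
            return state

        print(hex(toy_hash(b'hello world')))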


  • Display PDF in HTML

    - by anil
    Hi, I want to show a PDF in a view in MVC. The following action returns the file:

        public ActionResult TakeoffPlans(string projID)
        {
            Highmark.BLL.Models.Project proj = GetProject(projID);
            List ff = proj.GetFiles(Project_Thin.Folders.CompletedTakeoff, false);
            ViewData["HasFile"] = "0";
            if (ff != null && ff.Count > 0 && ff.Where(p => p.FileExtension == "pdf").Count() > 0)
            {
                ViewData["HasFile"] = "1";
            }
            ViewData["ProjectID"] = projID;
            ViewData["Folder"] = Project_Thin.Folders.CompletedTakeoff;
            //return View("UcRenderPDF");
            string fileName = Server.MapPath("~/Content/Project List Update 2.pdf");
            return File(fileName, "application/pdf", Server.HtmlEncode(fileName));
        }

    but it displays some bad data in the view. Please help me with this.


  • How do you energize yourself when working alone on a project?

    - by Stephane
    I am working in an environment with a very small team (only 3 developers), and each of us has been assigned a different project, not counting support tasks. I know this is bad business practice and that we should all work on a single project at a time, then move on to the next one (I have already explained to management how much this sucks), so don't answer that we should all work together on one project at a time. When working in a team, the energy came mostly from pair programming; we did that when fewer projects were thrown at us, and it was great. What I would like to know is how you energize your work when working alone on a project. Do you follow any particular practice?


  • Where does complexity bloat come from?

    - by AareP
    Many of our design decisions are based on gut feeling about how to avoid complexity and bloat. Some of our complexity fears are justified; we have plenty of painful experience throwing away deprecated code. Other times we learn that a particular task isn't really as complex as we thought it would be. We notice, for example, that maintaining 3000 lines of code in one file isn't that difficult... or that using special-purpose "dirty flags" isn't really bad OO practice... or that in some cases it's more convenient to have 50 variables in one class than 5 different classes with shared responsibilities... One friend has even stated that adding functions to a program doesn't really add complexity to the system. So, what do you think: where does bloated complexity creep in from? Is it variable count, function count, line count, lines per function, or something else?


  • Strange: planner picks the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts: PostgreSQL 8.4.2 on Linux. I make use of table inheritance; each table contains 3 million rows; indexes on the joining columns are set; table statistics (analyze, vacuum analyze) are up to date. The only table used is "node", with various partitioned sub-tables, in a recursive query (pg >= 8.4). Here is the query and its explained plan:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms

    So far so bad: the planner chooses hash joins (good) but no indexes (bad). Now, after SET enable_hashjoin TO false; the explained query looks like this:

        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms

    Incredibly faster, because indexes were used. Notice that the estimated cost of the second plan is considerably higher than that of the first. So the main question is: why does the planner make the first choice instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans; then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.


  • Linq To Sql: Compiled Queries and Extension Methods

    - by Beni
    Hi community, I'm interested in how LINQ to SQL handles a compiled query that returns IQueryable. If I call an extension method on top of a compiled query, like GetEntitiesCompiled().Count() or GetEntitiesCompiled().Take(x), what does LINQ to SQL do in the background? Does it load the result of GetEntitiesCompiled() into memory (mapped to the entity class, like ToList()) and then apply the operator? That would be very bad; in that situation I should instead write a compiled query like CountEntitiesCompiled. So in which situations does it make sense for a compiled query to return IQueryable, given that the query cannot be modified further before the request is sent to the SQL Server? In my opinion I might just as well return List. Thanks for answers!


  • sIFR newbie: font display problem in non-Flash browsers

    - by bullquartz
    Hi, I'm wondering if someone can help me with this, as I'm fairly new to sIFR and think there is something essential I'm not comprehending in the documentation. I'm having success using sIFR 3 (r436) to render fonts, and it's working just how I want. However, if I turn Flash off in any browser, the original (no longer rendered) HTML text displays very badly indeed. I thought that in any non-Flash browser my initial stylesheet would be referred to, rather than sIFR.css, and that I would be able to adjust the HTML text as a separate entity. I probably developed this bad idea because I remember that in earlier sIFR versions you had to mess around a lot with styling in the original stylesheet plus the sifr-config so that line heights/widths etc. would correspond between the HTML and the rendered font. (I realise that sIFR 3 renders the Flash in a different way.) So it seems that sIFR.css controls both the non-Flash text and the rendered font. Anyway, my essential noob question is: how do I get the original HTML text to have the same dimensions as the rendered font? Thanks for your help.


  • "Verbose Dictionary" in C#, 'override new' this[] or implement IDictionary

    - by Benjol
    All I want is a dictionary which tells me which key it couldn't find, rather than just saying "The given key was not present in the dictionary." I briefly considered a subclass with an "override new" this[TKey key], but felt it was a bit hacky, so I've gone with implementing the IDictionary interface and passing everything straight through to an inner Dictionary, the only additional logic being in the indexer:

        public TValue this[TKey key]
        {
            get
            {
                ThrowIfKeyNotFound(key);
                return _dic[key];
            }
            set
            {
                ThrowIfKeyNotFound(key);  // note: this also rejects adding new keys through the indexer
                _dic[key] = value;
            }
        }

        private void ThrowIfKeyNotFound(TKey key)
        {
            if (!_dic.ContainsKey(key))
                throw new ArgumentOutOfRangeException("Can't find key [" + key + "] in dictionary");
        }

    Is this the right/only way to go? Would newing over the this[] really be that bad?
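
    For comparison, a minimal sketch of the "new" indexer variant the question mentions (VerboseDictionary is a hypothetical name). The catch with member hiding is that it only kicks in when the variable's static type is the subclass; any caller holding the object as Dictionary<TKey, TValue> or IDictionary<TKey, TValue> silently bypasses the check, which is the usual argument against it:

        public class VerboseDictionary<TKey, TValue> : Dictionary<TKey, TValue>
        {
            public new TValue this[TKey key]
            {
                get
                {
                    if (!ContainsKey(key))
                        throw new KeyNotFoundException("Can't find key [" + key + "] in dictionary");
                    return base[key];
                }
                set { base[key] = value; }
            }
        }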


  • Excluding files from web logs

    - by Ray
    Looking through my web logs, I see a lot of entries that don't interest me. Some of them are commonly used images, CSS files, and scripts, which I can easily exclude by un-checking the 'log visits' checkbox in IIS in the folder's properties. I would also like to exclude log entries for certain common requests which are not in their own folders: mostly 'favicon.ico', 'ScriptResource.axd', and 'WebResource.axd'. These (especially ScriptResource.axd) make up almost a third of a typical log file on my site. So, the question is: how do I tell IIS not to log these requests? And is there any reason this would be a bad idea?
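
    On IIS 7 or later, one approach worth testing is the httpLogging section's dontLog switch, scoped to a path in web.config. A sketch (assumes the section is unlocked at the server level; repeat the location block per file):

        <location path="favicon.ico">
          <system.webServer>
            <httpLogging dontLog="true" />
          </system.webServer>
        </location>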


  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use, and how should I configure it? As far as I understand, the reason ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:

    1. Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.

    2. Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?

    3. Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.

    4. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count greater than 1, but that's not a problem, since I have only a few dozen such files in my use case.

    5. Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?

    6. Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?


  • Of DataGridViews, data binding, and non-validating cell values

    - by Yanko Hernández Alvarez
    Let's simplify. Say I have this class:

        class T
        {
            public string Name { get; set; }
            public int Age { get; set; }
            public int height { get; set; }
            ...
        }

    and a DataGridView whose DataSource is bound to a BindingList<T>, with N columns, each bound to one property. I need to:

    1. Allow the user to enter non-validating ages, heights, etc. (for instance "aaa").

    2. Color the cells with non-validating values (red background).

    3. Retain the non-validating values displayed until the form is closed (I don't want to lose the values entered, so the user has the option to correct the bad cells any time he wants before closing the form).

    4. Keep the last correct value entered for each cell holding a non-validating value.

    5. When the form is closed, ditch the non-validating values and keep the last correct values entered.

    Is there any easy way to do this?


  • In LaTeX, prefer figures on text-heavy pages

    - by bjarkef
    Hi, LaTeX seems to have a preference for placing figures together on a page and placing the surrounding text on a separate page. Can I shift that balance a bit? I prefer figures that break up the text, to avoid overly text-heavy pages. Example:

        \section{Some section}
        [Half a page of text]
        \begin{figure}
          [...]
          \caption{Figure text 1}
        \end{figure}
        [Half a page of text]
        \begin{figure}
          [...]
          \caption{Figure text 2}
        \end{figure}
        [More text]

    What LaTeX usually does here is stack the two half-pages of text on a single page and put the figures on the following page. I believe this gives a bad balance and bores the reader. So can I change that somehow? I know about suffixing \begin{figure} with [ht!], but often it has no effect. I would like to configure the float-placement algorithms in LaTeX to naturally prefer pages that combine figures and text.
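
    The float-placement fractions are the usual knobs for this. A sketch of preamble settings to experiment with (the stock defaults are stricter, which is part of what pushes floats onto float-only pages):

        \renewcommand{\topfraction}{0.85}      % max fraction of a page that top floats may fill
        \renewcommand{\bottomfraction}{0.5}    % same for bottom floats
        \renewcommand{\textfraction}{0.15}     % min fraction of a page that must be text
        \renewcommand{\floatpagefraction}{0.7} % float-only pages must be at least this full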


  • SQL Server 2005 Reporting Services: how to use multiple datasets in a report

    - by larryq
    Hi everyone, I'm new to SQL Server Reporting Services and am trying to decipher an existing report. It's nothing too bad, but I notice it has two report datasets defined (they are generated via separate stored procedures). I'm trying to figure out where and how the report datasets are linked together, so that the Fields collection has both sets of columns available and the report has a single rowset to traverse. Is there a section in the report layout where a join of datasets is defined? I'm using Visual Studio 2005 to design and preview the report, FWIW. Thanks for your help!


  • nginx proxy_pass: https redirects to http

    - by Thermionix
    I'm trying to set up nginx to forward requests to several backend services using proxy_pass; however, several pages load with 404s. The links on the pages have https:// in front, but the result is an http request, which ends in a 404. I only want these services to be available through https. I've tried varied trailing slashes appended to the proxy path and to the location in proxy.conf, and I've also tried commenting out www.conf (just in case its location blocks caused any conflicts), to no effect. So if a link points to https://example.com/sickbeard/errorlogs: loading https://example.com/sickbeard/errorlogs in a browser gives a 404, while https://example.com/sickbeard/errorlogs/ loads. The nginx error log shows:

        2011/11/23 14:21:58 [error] 28882#0: *6 "/var/www/sickbeard/errorlogs/recent.html" is not found (2: No such file or directory), client: 192.168.1.99, server: example.com, request: "GET /sickbeard/errorlogs/ HTTP/1.1", host: "example.com"

    Config files:

    proxy.conf:

        location /sickbeard {
            proxy_pass http://localhost:8081/sickbeard;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main:

        server {
            listen 80;
            include www.conf;
        }
        server {
            listen 443;
            include proxy.conf;
            include www.conf;
            ssl on;
        }

    www.conf:

        root /var/www;
        server_name example.com;
        location / {
            autoindex off;
            allow all;
            rewrite ^/$ /mainsite last;
            location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
                expires max;
            }
            location ~ \.php$ {
                fastcgi_index index.php;
                include fastcgi_params;
                if (-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9000;
                }
            }
        }

    proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
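
    One detail worth noting for this class of problem: when TLS terminates at nginx, the backend only sees plain http and will generate http:// links unless it is told the original scheme. The conventional hint is a forwarded-protocol header, which the proxied application must also be configured to honor; a sketch of the usual addition to proxy.inc:

        proxy_set_header X-Forwarded-Proto $scheme;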


  • A web framework where AJAX was not an afterthought

    - by Pirate for Profit
    AJAX is a pain in the ass because it essentially means you'll have to write two similar sets of code: one for browsers with JavaScript enabled and one for those without. Not only that, but you have to connect JavaScript events to hook into your models and display the results. And if all that weren't bad enough, you need to send an address change with the request, otherwise the user won't be able to "click back" correctly (if confused, look at what happens to the address bar when you click links in GMail). We're searching for something designed with all these concerns in mind from the start. Performance and security are also obvious major concerns. We love config-based systems as well, where you don't have to write a lot of code, you just drop it into an easily read config format. It's like asking for the holy grail, right?


  • Freelancer's problem and legal actions

    - by user198003
    Hi, last year I worked as a freelancer on a project. Unfortunately, the person I worked for decided he was not happy with my work, and he fired me. After 7 months, he called me and asked me to return "his" money; otherwise, he will sue me. His version is that I have to give him 1500 euros; my version is that he owes me another 1500. I have no contract, only emails and an Excel file with counted hours. What do I have to do? I don't want to give him back the 1500, because it was my work and his bad management. I also don't want my 1500, because I think claiming it would not be fair on my side. What should I do?


  • Routinely sync a branch to master using git rebase

    - by m1755
    I have a Git repository with a branch that hardly ever changes (nobody else contributes to it). It is basically the master branch with some code and files stripped out. Having this branch around makes it easy for me to package up a leaner version of my project without having to strip out the code and files manually every time. I have been using git rebase to keep this branch up to date with master, but I always get this warning when I try to push the branch after rebasing:

        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again.  See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    I then use git push --force and it works, but I feel like this is probably bad practice. I want to keep this branch "in sync" with master quickly and easily. Is there a better way of handling this task?
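
    For context: rebasing rewrites the branch's commits, so the rewritten branch can never be a fast-forward of what the remote already has; some form of forced push is inherent to this workflow, and with a single contributor that is generally considered acceptable. A sketch of the round trip ("lean" is a hypothetical name for the stripped branch; --force-with-lease, available in modern Git, refuses to overwrite remote commits you haven't seen, making it safer than a plain --force):

        git checkout lean
        git rebase master
        git push --force-with-lease origin lean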


  • Handling over-long UTF-8 sequences

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle over-long UTF-8 byte sequences and convert them to the shortest normal form. My question is quite simply: is this a bad idea? A number of sources (including this RFC) suggest that any over-long UTF-8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean UTF-8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?
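
    For concreteness, the classic danger case is an over-long encoding smuggling an ASCII character past a byte-oriented filter, e.g. 0xC0 0xAF as a two-byte spelling of '/'. A sketch in Python of what "shortest normal form" means here (the decoding-by-hand is illustrative, not how Encoding::FixLatin is implemented):

        overlong = bytes([0xC0, 0xAF])  # over-long two-byte encoding of U+002F '/'

        # A strict decoder rejects it outright:
        try:
            overlong.decode('utf-8')
        except UnicodeDecodeError as e:
            print('rejected:', e)

        # Recovering the intended code point from the raw bits and
        # re-encoding yields the one-byte shortest form:
        cp = ((overlong[0] & 0x1F) << 6) | (overlong[1] & 0x3F)
        print(hex(cp))                  # 0x2f
        print(chr(cp).encode('utf-8'))  # b'/'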


  • What is the main purpose of keeping a staging server identical to production?

    - by truthseeker
    Hi, in our company we have staging and production servers, and I'm trying to keep them in a 1:1 state after the latest release. We've got a web application running on several hosts, with many instances of it. The issue is that I advocate having the same architecture (structure) of web applications on the staging and production servers, to easily test new features and avoid introducing new bugs with new releases. But not everyone agrees with me: for them it is not such a big deal to have different connections between staging application instances, or even to have more applications, and more connections between them, on staging than on production. I would like to ask about the pros and cons of such an approach — some good points to support my view, or some reasons why I might be wrong, with examples of the consequences and so forth.


  • How do I use theme preprocessor functions for my own templates?

    - by Jergason
    I have several .tpl.php files for nodes, CCK fields, and Views theming. These template files have a lot of logic in them to move things around, strip links, create new links, etc. I understand that this is bad development practice and not "The Drupal Way". If I understand correctly, "The Drupal Way" is to use preprocessor functions in your template.php file to manipulate variables and add new variables. A few questions about that:

    1. Is there a naming convention for creating a preprocessor function for a specific theme? For example, if I have a CCK field template called content-field-field_transmission_make_model.tpl, how would I name the preprocessor function?

    2. Can I use template preprocessor functions for node templates, CCK field templates, and Views templates?

    3. Do they have different methods of modifying template variables or adding new ones?
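
    For the first question, the usual convention is THEMENAME_preprocess_HOOK (or MODULENAME_preprocess_HOOK in a module), placed in template.php. A minimal sketch for a node template, assuming a theme named "mytheme" (hypothetical; the added variable name is illustrative too):

        <?php
        // Runs before node.tpl.php; add or massage variables here
        // instead of embedding logic in the template itself.
        function mytheme_preprocess_node(&$variables) {
          $variables['submitted_short'] =
            format_date($variables['node']->created, 'small');
        }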


  • Better way to clean up this messy bool method

    - by Luís Custódio
    I'm reading Fowler's Clean Code book, and I think my code is a little messy, so I'd like some suggestions. I have a simple business requirement: return the date of the next execution of my thread. I have two class fields, _hour and _day. If the actual day is greater than my _day field, I must return true, so I'll add a month to the execution date. If the day is the same but the actual hour is greater than _hour, I should return true too. So I wrote this simple method:

        private bool ScheduledDateGreaterThanCurrentDate(DateTime dateActual)
        {
            if (dateActual.Day > _day)
            {
                return true;
            }
            if (dateActual.Day == _day && dateActual.Hour > _hour)
            {
                return true;
            }
            if (dateActual.Day == _day && dateActual.Hour == _hour)
            {
                if (dateActual.Minute > 0 || dateActual.Second > 0)
                    return true;
            }
            return false;
        }

    I'm programming with TDD, so I know the return value is correct, but this is hard-to-maintain code, right?
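
    One possible simplification, as a sketch rather than a drop-in replacement: build the scheduled instant once and compare. It preserves the branches above (a strictly later day, a later hour on the same day, or any nonzero minute/second at the same day and hour returns true), but note it assumes _day is always a valid day number for the current month:

        private bool ScheduledDateGreaterThanCurrentDate(DateTime dateActual)
        {
            // The scheduled moment in the current month: day _day, hour _hour, minute/second zero.
            var scheduled = new DateTime(dateActual.Year, dateActual.Month, _day, _hour, 0, 0);
            return dateActual > scheduled;
        }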

