Search Results


  • Access Log Files

    - by Matt Watson
    Some of the simplest things in life make all the difference. For a software developer trying to solve an application problem, being able to access log files, the Windows Event Viewer, and other details is priceless. Ironically enough, most developers aren't even given access to them; they have to escalate the issue to their manager or a system admin to retrieve the needed information. Some companies create workarounds to solve the problem, or use third-party solutions.

    Home-grown solutions for accessing log files: Some companies roll their own solution. These solutions can be great, but they are not always real time, and they don't account for the Windows Event Viewer, config files, server health, and other information that is needed to fix bugs. Typical approaches include:

    - VPN or FTP access to log file folders
    - Programs that collect log files and move them to a centralized server
    - Code changes that write log files to a centralized place

    Expensive solutions for accessing log files: Some companies buy expensive products like Splunk or other log management tools. In a lot of cases that is overkill, when all the developers need is the ability to look at log files, not run analytics on them.

    There has to be a better solution: Stackify recently came up with a solution to the problem. Their software gives developers remote visibility into all the production servers without allowing them to remote desktop into the machines. They get real-time access to log files, the Windows Event Viewer, config files, and the other things developers need. This allows the entire development team to be more involved in the process of solving application defects. Check out their product to learn more: http://www.Stackify.com

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by Singletony
    Hi guys. We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI: we analyze very large DBs and use them to create intermediate DBs that represent intelligent knowledge of the originals. We are currently using JDBC and simply performing queries through a ResultSet. As more and more data is created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs: We need to support "chunk" manipulation rather than an entire DB at once (e.g. LIMIT in JDBC has very poor performance). We do not need to be constantly connected, since we are just pulling results and creating new tables of our own. We want to understand the alternatives to JDBC, with their respective advantages and disadvantages. Whether you think JDBC is the way to go or not, what are the best practices depending on context (e.g. for large DBs queried in chunks)? If my question is not clear, I will gladly elaborate! Thank you so much!
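
    A minimal streaming-read sketch with plain JDBC (the connection URL, table, and process() helper are placeholders; the Integer.MIN_VALUE fetch-size hint is MySQL-specific - other drivers take a normal positive fetch size):

        import java.sql.*;

        public class ChunkedReader {
            // Streams a large result set row by row instead of
            // materializing the whole table in memory.
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/warehouse", "user", "pass");
                     Statement st = con.createStatement(
                         ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                    st.setFetchSize(Integer.MIN_VALUE); // MySQL: stream, don't buffer
                    try (ResultSet rs = st.executeQuery(
                            "SELECT id, payload FROM big_table")) {
                        while (rs.next()) {
                            process(rs.getLong("id"), rs.getString("payload"));
                        }
                    }
                }
            }

            static void process(long id, String payload) {
                // Aggregate into the intermediate DB here.
            }
        }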

    Read the article

  • Using XML in a Flex Website to Improve SEO

    - by Laxmidi
    Hi, I've got a Flex 3 site called www.brainpinata.com that's a trivia game. Basically, everything in the site is pulled from a database - the questions, choices, and answers - so, unfortunately, Google doesn't index my content. I'm trying to think of ways to improve the situation: A) If I took my database data and put it in an XML file in the website's root directory, would this work? Would it violate any Google policy? (The info would be the same as in the DB, so nothing shady.) Would I have to wire the XML into my site, or would it be enough to just have the XML sitting in the root directory? B) Another idea is to use the noscript tag and load the XML content there. As I understand it, Google indexes the content that people with JavaScript turned off would see. I know Flex/ActionScript 3, but unfortunately I don't know how to load XML content with HTML. Does anyone know of an example where a Flex site uses XML for the noscript content? Thank you. -Laxmidi
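
    A minimal noscript sketch for option B: emit a plain-HTML mirror of the database content next to the Flash embed, since crawlers read HTML rather than a bare XML file in the root directory (the markup below is illustrative and would be generated server-side from the same DB):

        <noscript>
          <!-- Static mirror of the trivia content for crawlers and
               users without JavaScript -->
          <div id="trivia-content">
            <h2>Trivia questions</h2>
            <p>Q: Which planet is closest to the sun? A: Mercury</p>
            <!-- ...one entry per question, generated from the database -->
          </div>
        </noscript>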

    Read the article

  • Remote reboot over ssh does not restart

    - by Finn Årup Nielsen
    I would like to remotely reboot my Ubuntu 12.04 LTS server via ssh. I run sudo reboot, I lose the connection, and the server never comes back up. It does not ping. When I go to the physical computer with a screen attached, I see a black screen and hear that the server is still on. I do a hard power off (pressing the power button for a few seconds) and the server halts. After I press power on, the server boots with no problem. As far as I remember, remote reboot has previously worked on this server. I wonder if sudo reboot & would help? I suppose I could also try sudo shutdown -r and see if that makes any difference. I have listed an excerpt of /var/log/syslog below. The last thing it records is the logging being stopped.

        Oct 24 10:14:49 servername kernel: [1354427.594709] init: cron main process (1060) killed by TERM signal
        Oct 24 10:14:49 servername kernel: [1354427.594908] init: irqbalance main process (1080) killed by TERM signal
        Oct 24 10:14:49 servername kernel: [1354427.595299] init: tty1 main process (1424) killed by TERM signal
        Oct 24 10:14:49 servername kernel: [1354427.637747] init: plymouth-upstart-bridge main process (20873) terminated with status 1
        Oct 24 10:14:49 servername kernel: Kernel logging (proc) stopped.
        Oct 24 10:14:49 servername rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="876" x-info="http://www.rsyslog.com"] exiting on signal 15.
        Oct 24 10:25:34 servername kernel: imklog 5.8.6, log source = /proc/kmsg started.
        Oct 24 10:25:34 servername rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="862" x-info="http://www.rsyslog.com"] start
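
    Two low-risk things to try from the ssh session (a sketch; whether they help depends on what is actually hanging the shutdown):

        # Detach the reboot from the ssh session, so the dropped
        # connection cannot interrupt it:
        nohup sudo reboot >/dev/null 2>&1 &

        # Or go through shutdown, which performs the full runlevel change:
        sudo shutdown -r now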

    Read the article

  • Solaris Tech Day with Engineering, December 3, Frankfurt

    - by Franz Haberhauer
    On Tuesday, December 3, 2013, we will host Markus Flierl, head of Solaris Engineering, together with several of his engineers and Joost Pronk from product management, at our office in Dreieich (Frankfurt). We are taking this opportunity to give you deep insights into Solaris technologies straight from the source at a Solaris Tech Day:

    Agenda:
    09:00  Registration and Breakfast
    09:45  Oracle Solaris - Strategy, Engineering Insights, Roadmap, and a Glimpse on Solaris in Oracle's IT (Markus Flierl)
    11:15  Coffee
    11:35  Oracle Solaris 11.1: The Best Platform for Oracle - The Technologies Behind the Scenes (Bart Smaalders)
    12:35  Lunch
    13:25  Solaris Security: Reduce Risk, Deliver Secure Services, and Monitor Compliance (Darren Moffat)
    14:10  Solaris 11 Provisioning and SMF - Insights from the Lead Engineers (Bart Smaalders & Liane Praza)
    14:55  Solaris Data Management - ZFS, NFS, dNFS, ASM, and OISP Integration with the Oracle DB (Darren Moffat)
    15:25  Coffee
    15:45  Solaris 10 Patches and Solaris SRUs - News and Best Practices (Gerry Haskins)
    16:30  Cloud Formation: Implementing IaaS in Practice with Oracle Solaris (Joost Pronk)
    17:00  Q&A panel - All presenters and Solaris engineers

    Please register here to secure a seat at this exceptional event. It is also worth looking at the blogs of Markus Flierl - including an interesting post with impressions and outlooks from Oracle OpenWorld 2013 - and of Darren Moffat. Gerry Haskins, as Director of Solaris Lifecycle Engineering, writes two blogs: the Patch Corner, with a focus on Solaris 10, and the Solaris 11 Maintenance Lifecycle. And already next week, the DOAG 2013 conference and exhibition takes place in Nuremberg, with a broad range of talks around Solaris, including many hands-on experience reports from the field.

    Read the article

  • What is the structure of NetworkManager's system-connections files?

    - by Oyks Livede
    Could anyone list the complete structure of the configuration files which NetworkManager stores for known networks in /etc/NetworkManager/system-connections? Sample (filename askUbuntu):

        [connection]
        id=askUbuntu
        uuid=81255b2e-bdf1-4bdb-b6f5-b94ef16550cd
        type=802-11-wireless

        [802-11-wireless]
        ssid=askUbuntu
        mode=infrastructure
        mac-address=00:08:CA:E6:76:D8

        [ipv6]
        method=auto

        [ipv4]
        method=auto

    I would like to create some of these files myself using a script. However, before doing so I would like to know every possible option. Furthermore, this structure seems to resemble the information you can get over D-Bus for active connections:

        dbus-send --system --print-reply \
            --dest=org.freedesktop.NetworkManager \
            "$active_setting_path" \  # /org/freedesktop/NetworkManager/Settings/2
            org.freedesktop.NetworkManager.Settings.Connection.GetSettings

    will tell you:

        array [
          dict entry( string "802-11-wireless"
            array [
              dict entry( string "ssid" variant array of bytes "askUbuntu" )
              dict entry( string "mode" variant string "infrastructure" )
              dict entry( string "mac-address" variant array of bytes [ 00 08 ca e6 76 d8 ] )
              dict entry( string "seen-bssids" variant array [ string "02:1A:11:F8:C5:64" string "02:1A:11:FD:1F:EA" ] )
            ] )
          dict entry( string "connection"
            array [
              dict entry( string "id" variant string "askUbuntu" )
              dict entry( string "uuid" variant string "81255b2e-bdf1-4bdb-b6f5-b94ef16550cd" )
              dict entry( string "timestamp" variant uint64 1383146668 )
              dict entry( string "type" variant string "802-11-wireless" )
            ] )
          dict entry( string "ipv4"
            array [
              dict entry( string "addresses" variant array [ ] )
              dict entry( string "dns" variant array [ ] )
              dict entry( string "method" variant string "auto" )
              dict entry( string "routes" variant array [ ] )
            ] )
          dict entry( string "ipv6"
            array [
              dict entry( string "addresses" variant array [ ] )
              dict entry( string "dns" variant array [ ] )
              dict entry( string "method" variant string "auto" )
              dict entry( string "routes" variant array [ ] )
            ] )
        ]

    I can create new settings files over D-Bus (AddConnection() on /org/freedesktop/NetworkManager/Settings), passing this type of input, so explaining this structure and telling me all possible options will also help. AFAIK, this is a Dictionary{String, Dictionary{String, Variant}}. Will there be any difference between creating the config files directly and going through D-Bus?
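
    For comparison, a WPA-PSK connection written by NetworkManager typically adds a security section referenced from the wireless block - a sketch (the psk value is a placeholder; the full set of sections and keys is documented in the nm-settings manual page):

        [connection]
        id=askUbuntu
        uuid=81255b2e-bdf1-4bdb-b6f5-b94ef16550cd
        type=802-11-wireless

        [802-11-wireless]
        ssid=askUbuntu
        mode=infrastructure
        security=802-11-wireless-security

        [802-11-wireless-security]
        key-mgmt=wpa-psk
        psk=your-passphrase-here

        [ipv4]
        method=auto

        [ipv6]
        method=auto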

    Read the article

  • Record management system Java web framework

    - by Kamil Tomšík
    We're currently reconsidering technologies and frameworks to get more agile with "simple" RMS CRUD-based projects - in short, short-lived things like this. Right now we have a custom extension on top of SmartGWT, but over time it has proven not flexible enough. I also personally dislike the Java-to-JS compilation process and the whole GWT codebase. Not only is the design ugly, it also makes certain low-level JS things very complicated, if not completely impossible. So what I'm looking for is:

    - as close to the web as possible, like JSF or possibly Tapestry; it is very important to be able to get "low" and bend the framework when necessary - this happens more often than we thought
    - datagrid capable - Ext.js & PrimeFaces look pretty good, and Vaadin does too
    - DB-schema generators (optional, in either direction)

    If it were only up to me, I'd probably stick to Ext.js plus a custom REST-based Java solution, possibly generated from the database schema (not sure about concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it is hard for me to judge or even propose other frameworks. What would be your proposition? What problems have you faced? What were your solutions, and how hard do you think it was to deal with them in the big picture?

    Read the article

  • Which tools do you use for development in your company? Please be exact [closed]

    - by predrag.music
    If you are a professional PHP/(My/Postgre/?)SQL developer working in a professional team, I would like to know which tools you use for development in your company. I do not care which tool is better or worse, just which tools you actually use - if that is not TOP SECRET :) For example, these are just some of the tools I/we use (roughly in order of use):

    - Pen, paper
    - Lots of coffee, cola ... let me think ... mmmm ... yeah, more coffee :)
    - All kinds of books (lots of books)
    - OS: Win / Mac OS X
    - Server: hosted (CentOS) / at work: Mac OS X
    - Dev server: XAMPP / MAMP / LAMP
    - Editor: Notepad++
    - IDE: NetBeans / Zend Studio / Eclipse
    - Version control system: Mercurial / SVN
    - FTP: FileZilla mostly
    - Passwords: KeePass
    - JS / AJAX: jQuery / pure JS / jQuery UI
    - Framework: CI / Zend / pure PHP
    - Database: MySQL / other
    - ORM: the framework's DB layer (not an ORM, I know, but...) / Doctrine (2) / no ORM
    - Debugging: Xdebug (PHP) / Firebug (AJAX/JS/HTML/CSS/...) / framework profiler
    - ... and a zillion other things I only half remember, gave up on, lost, or swore "never again" to, for all sorts of perfectly justified reasons (time, memory, wife :), whatever)

    What is the reason I'm asking this? :) Looking forward to a lot of answers!

    Read the article

  • Performance Overhead of Encrypted /home

    - by SabreWolfy
    I have a netbook with Windows on the second partition and Xubuntu (/ and /home) on the third partition. I selected home-folder encryption during installation. The performance of the netbook is adequate for the small machine that it is, but I'm looking to improve it. I could not find much information about the overhead (CPU or drive) associated with home-partition encryption. I ran the following, writing to my home partition as well as to the mounted Windows partition:

        dd if=/dev/zero of=~/dummy bs=512 count=10240
        dd if=/dev/zero of=/media/Windows/dummy bs=512 count=10240

    The first returned 2.4 MB/s and the second 2.5 MB/s. Can I therefore deduce that there is very little overhead to home-folder encryption? I'm not sure whether the different filesystems make any difference (/ and /home are ext3).

    Update 1: I don't know why I didn't use /tmp instead of the mounted Windows folder. Only /home is encrypted, so /tmp is unencrypted ext3. The results of the same dd commands are astounding:

        ~:    2.4 MB/s
        /tmp: 42.6 MB/s

    Comments please? The reason I am asking is that disk access on the netbook is noticeably slow.

    Update 2: I timed each of the dd operations with time:

        ~:    real 0m2.217s  user 0m0.028s  sys 0m2.176s
        /tmp: real 0m0.152s  user 0m0.012s  sys 0m0.136s

    See also: discussion on UbuntuForums.org and bug report.

    Edit: Output of mount:

        /dev/sda3 on / type ext3 (rw,noatime,errors=remount-ro,user_xattr,commit=600)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        fusectl on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
        gvfs-fuse-daemon on /home/USER/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=USER)
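
    One caveat when reading these numbers: a write this small mostly measures the page cache unless the data is forced out to disk, so the rates are more comparable with conv=fdatasync (a sketch using the same sizes as above):

        # Flush to disk before dd reports its rate, so the figure
        # includes the eCryptfs overhead rather than RAM speed:
        dd if=/dev/zero of=~/dummy bs=512 count=10240 conv=fdatasync
        dd if=/dev/zero of=/tmp/dummy bs=512 count=10240 conv=fdatasync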

    Read the article

  • Should programmers itemize testing for projects? [on hold]

    - by Patton77
    I recently hired a programming team to port my iPad app to the iPhone and Android platforms. Now, in a separate contract, I am asking them to implement a bunch of tips on how to play the app, similar to what you would find in Candy Crush or Cut the Rope. They want to charge 12 hours @ $35/hr for "testing all of the tips", telling me that normally it would take them more than 25 hours but that they will "bear the difference". I am not familiar with this level of itemization - maybe it's a new practice? I am used to devs doing their own quality control and then having a testing/acceptance period. They are using Cocos2d-x, and they say that shipping the tips to multiple platforms drives all of the hours up. I feel like they might be overcharging, and it's difficult for me to know, because it's like dealing with a mechanic: "It took us 5 hours to replace the radiator." How can you dispute that? It seems to me that most of you would charge for the work but NOT for hours spent "testing". Am I missing something? Thanks for any help and advice you can give!

    Read the article

  • Identity in .NET 4.5 – Part 1: Status Quo (Beta 1)

    - by Your DisplayName here!
    .NET 4.5 is a big release for claims-based identity. WIF becomes part of the base class library, and structural classes like Claim, ClaimsPrincipal and ClaimsIdentity even go straight into mscorlib. You can now access all WIF functionality from prominent namespaces like System.Security.Claims and System.IdentityModel (yay!). But it is more than simply merging assemblies; in fact, claims are now first-class citizens in the whole .NET Framework. All built-in identity classes, like FormsIdentity for ASP.NET and WindowsIdentity, now derive from ClaimsIdentity. Likewise, all built-in principal classes like GenericPrincipal and WindowsPrincipal derive from ClaimsPrincipal. In other words, the moment you compile your .NET application against 4.5, you are claims-based. That's a big (and excellent) change. While the classes are designed so that you won't "feel" a difference by default, having the power of claims under the hood (and by default) will change the way we design security features with the new .NET Framework. I am currently doing a number of proofs of concept and will write about them in the future. There are a number of nice "little" features, like the FindAll(), FindFirst() and HasClaim() methods on both ClaimsIdentity and ClaimsPrincipal, which make querying claims much more streamlined. I also had to smile when I saw ClaimsPrincipal.Current (have a look at the code yourself) ;) With all the goodness also comes a number of breaking changes. I will write about those, too. In addition, Vittorio announced just today the beta availability of a new wizard/configuration tool that makes it easier to do common things like federating with an IdP or creating a test STS. Go get the beta and the tools and start writing claims-enabled applications! Interesting times ahead!
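
    A quick sketch of those query helpers in action (based on the final .NET 4.5 surface; details may have shifted slightly since the beta this post describes):

        using System;
        using System.Security.Claims;

        class Demo
        {
            static void Main()
            {
                // Build an authenticated identity carrying two claims.
                var identity = new ClaimsIdentity(new[]
                {
                    new Claim(ClaimTypes.Name, "alice"),
                    new Claim(ClaimTypes.Role, "admin")
                }, "demo");
                var principal = new ClaimsPrincipal(identity);

                Console.WriteLine(principal.HasClaim(ClaimTypes.Role, "admin")); // True
                Console.WriteLine(principal.FindFirst(ClaimTypes.Name).Value);   // alice
            }
        }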

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature. There is one recent improvement that is worth blogging about: an effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens their execution time. I will describe the issue, our solution, and the end result, with some performance numbers that demonstrate our continuing enhancement of the Full-Text Search capability.

    The Issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as inverted auxiliary tables. A query, once parsed, is reinterpreted into several queries against the related auxiliary tables, and the results are then merged and consolidated into the final result. So at the end of the query, we have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had initially been designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:

    Case 1: Query result ordered by rank, with only the top N results:

        mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;

    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve the ranking for almost every row in the table, sort them all, and then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary, given that InnoDB already has the answer. In real life, a user could have millions of rows; under the old scheme, millions of rows' rankings would be retrieved and sorted even if the FTS had already found only 3 matching rows. The million-ranking retrieval is done in vain: MySQL should just ask for the 3 matched rows' rankings, since all other rows' rankings are 0, and if it wants the top ranking it can simply take the first record from our already sorted result.

    Case 2: SELECT COUNT(*) on matching records:

        mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    In this case, the InnoDB search finds the matching rows quickly and has all of them on hand. However, before our change, every row in the table was requested by MySQL one by one, just to check whether its ranking was larger than 0, and a count was produced afterwards. In fact, there is no need for MySQL to fetch all rows: InnoDB already has all the matching records, and the only thing needed is to call an InnoDB API to retrieve the count. The difference can be huge. The following query output shows how big it can be:

        mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
        +----------+
        | count(*) |
        +----------+
        |   666877 |
        +----------+
        1 row in set (16 min 17.37 sec)

    So the query took almost 16 minutes. Let's see how long InnoDB needs to come up with the result. In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this prints extra query info to the error log:

        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 2 secs: row(s) 666877: error: 10
        ft_init() ft_init_ext() keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 3 secs: row(s) 666877: error: 10

    The output shows that it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. A large amount of time was wasted on the unneeded row fetching.

    The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan, and obtain useful information directly from InnoDB. The savings from doing this include:

    1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query-processing layer does not need to sort again to get the top matching results.

    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; everything not in that result list has a ranking of 0 and does not need to be retrieved. And InnoDB already has the count of total matching records on hand, so there is no need to recount.

    3) Covered index scan. The InnoDB result always contains the matching records' Document IDs and their rankings. So if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the records themselves.

    4) Narrow the search result early, and reduce user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table. We can first select the top N matching Doc IDs, and then fetch only the corresponding records.

    Performance results and comparison with MyISAM: The effect of this change is very visible. I include six test results performed by Alexander Rubin to demonstrate how fast InnoDB queries now are compared with MyISAM Full-Text Search. These tests are based on English Wikipedia data of 5.4 million rows, in a table of approximately 16 GB, performed on a machine with a dual-core CPU, an SSD drive, 8 GB of RAM, and innodb_buffer_pool_size set to 8 GB.

    Table 1: SELECT with LIMIT clause

        mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;

        InnoDB: 1.63 sec | MyISAM: 3 min 26.31 sec | InnoDB 127 times faster

    For this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query

        mysql> select count(*) from si where match(si_title, si_text) against('family');
        +----------+
        | count(*) |
        +----------+
        |   293955 |
        +----------+

        InnoDB: 1.35 sec | MyISAM: 28 min 59.59 sec | InnoDB 1289 times faster

    In this particular case, with 293k matching results, InnoDB took only 1.35 seconds to get all of them, while MyISAM took almost half an hour - about 1289 times slower!

    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                       | InnoDB   | MyISAM    | Times faster
        family                                     | 0.5 sec  | 5.05 sec  | 10.1
        family film                                | 0.95 sec | 25.39 sec | 26.7
        Pizza restaurant orange county california  | 0.93 sec | 32.03 sec | 34.4
        President united states of america         | 2.5 sec  | 36.98 sec | 14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                       | InnoDB   | MyISAM    | Times faster
        family                                     | 0.61 sec | 41.65 sec | 68.3
        family film                                | 1.15 sec | 47.17 sec | 41.0
        Pizza restaurant orange county california  | 1.03 sec | 48.2 sec  | 46.8
        President united states of america         | 2.49 sec | 44.61 sec | 17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                       | InnoDB   | MyISAM    | Times faster
        family                                     | 0.5 sec  | 5.05 sec  | 10.1
        family film                                | 0.95 sec | 25.39 sec | 26.7
        Pizza restaurant orange county california  | 0.93 sec | 32.03 sec | 34.4
        President united states of america         | 2.5 sec  | 36.98 sec | 14.8

    Table 6: SELECT COUNT(*)

        mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

        Term                                       | InnoDB   | MyISAM  | Times faster
        family                                     | 0.47 sec | 82 sec  | 174.5
        family film                                | 0.83 sec | 131 sec | 157.8
        Pizza restaurant orange county california  | 0.74 sec | 106 sec | 143.2
        President united states of america         | 1.96 sec | 220 sec | 112.2

    Again, Tables 3 through 6 all show InnoDB consistently outperforming MyISAM on these queries by a large margin. It is obvious that InnoDB has a great advantage over MyISAM in handling large-data search.

    Summary: These results demonstrate the performance we can achieve by coupling the MySQL optimizer and InnoDB Full-Text Search more tightly. I think there are still many cases where InnoDB's result info is not fully taken advantage of, which means we still have plenty of room to improve. We will continue to explore this area and deliver more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • How do I enable sound with the "linux-virtual" kernel?

    - by Ola Tuvesson
    I've been trying to enable sound for the linux-virtual kernel, as I want to run an ultra-slim Ubuntu server under VirtualBox but need audio. The resource-usage difference between virtual and generic/server is surprisingly large, with the virtual-kernel system using 80 MB less RAM after a clean boot (130 MB vs 210 MB), and I really want to squeeze every clock cycle and available byte I can out of the system. Besides, the virtual kernel has some additional optimisations enabled specifically for virtual machines (or so I am told). Now, I have compiled my own kernel a few times in the past, for example to include the Intel-PHC module (for improved power management on ThinkPads), so the concept is not entirely alien to me, but I've run into a strange problem which I'm hoping someone can help explain: when I diff the config files for linux-generic and linux-virtual, there are precious few differences, and certainly none that pertain to sound support; there are really only five or six lines which differ, and they're mainly to do with I/O timing, sleep states and priorities. What gives? I expected the differences to be extensive, and that I would be able to identify the options that enabled audio by looking at them, but my problem doesn't seem to be related to the config file at all (yes, I know about the sound drivers section - it is identical between the two kernel configs). Am I looking in the wrong place? Many thanks!
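
    If the kernel configs match, the missing piece is likely packaging rather than configuration: the virtual flavour ships with a reduced module set, and the sound drivers live in the extra-modules package. A sketch of the usual fix (package and module names are the common Ubuntu ones - verify they exist for your release):

        # Pull in the extra kernel modules (including ALSA drivers)
        # for the virtual kernel flavour:
        sudo apt-get install linux-image-extra-virtual

        # Load the driver for VirtualBox's emulated sound device,
        # e.g. the ICH AC'97 controller:
        sudo modprobe snd-intel8x0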

    Read the article

  • How to set up a Wireless Access Point using my laptop's WiFi card?

    - by Abdul Karim Memon
    I want to share my laptop's broadband connection (the laptop runs Ubuntu 10.10) with my Android phone (Galaxy Mini, running 2.2.1). Since Android currently does not support ad-hoc networks, "Create new wireless network..." won't help. Q1) How do I set up a wireless access point using my laptop's WiFi card? Q2) What is the difference between an "ad-hoc" network and an "access point"?

        abdulkarim@aK-laptop:~$ lspci | grep ireless
        03:00.0 Network controller: Atheros Communications Inc. AR9287 Wireless Network Adapter (PCI-Express) (rev 01)

        $ iw list
        Wiphy phy0
          Band 1:
            Capabilities: 0x11ce HT20/HT40 SM Power Save disabled RX HT40 SGI TX STBC RX STBC 1-stream
              Max AMSDU length: 7935 bytes DSSS/CCK HT40
            Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
            Minimum RX AMPDU time spacing: 8 usec (0x06)
            HT TX/RX MCS rate indexes supported: 0-15
            Frequencies:
              * 2412 MHz [1] (20.0 dBm)   * 2417 MHz [2] (20.0 dBm)   * 2422 MHz [3] (20.0 dBm)
              * 2427 MHz [4] (20.0 dBm)   * 2432 MHz [5] (20.0 dBm)   * 2437 MHz [6] (20.0 dBm)
              * 2442 MHz [7] (20.0 dBm)   * 2447 MHz [8] (20.0 dBm)   * 2452 MHz [9] (20.0 dBm)
              * 2457 MHz [10] (20.0 dBm)  * 2462 MHz [11] (20.0 dBm)
              * 2467 MHz [12] (20.0 dBm) (passive scanning)
              * 2472 MHz [13] (20.0 dBm) (passive scanning)
              * 2484 MHz [14] (disabled)
            Bitrates (non-HT): 1.0 Mbps; 2.0, 5.5, 11.0 Mbps (short preamble supported);
              6.0, 9.0, 12.0, 18.0, 24.0, 36.0, 48.0, 54.0 Mbps
          max # scan SSIDs: 4
          Supported interface modes: IBSS, managed, AP, AP/VLAN, monitor, mesh point
          Supported commands: new_interface, set_interface, new_key, new_beacon, new_station, new_mpath,
            set_mesh_params, set_bss, authenticate, associate, deauthenticate, disassociate, join_ibss,
            set_wiphy_netns, connect, disconnect (plus unknown commands 55, 57, 59, 65)

    Note that the card reports support for AP and AP/VLAN modes.
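
    Since the card supports AP mode, a userspace access point can be run with hostapd - a minimal sketch, where the interface name, SSID, channel, and passphrase are placeholders to adjust:

        # /etc/hostapd/hostapd.conf
        interface=wlan0
        driver=nl80211
        ssid=MyAccessPoint
        hw_mode=g
        channel=6
        wpa=2
        wpa_key_mgmt=WPA-PSK
        wpa_passphrase=ChangeMe123
        rsn_pairwise=CCMP

    As for Q2: an ad-hoc (IBSS) network is peer-to-peer with no central coordinator, while an access point is the central station of an infrastructure network - which is what stock Android builds of that era require in order to connect.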

    Read the article

  • Need assistance matching a general theme style as well as eCommerce capability

    - by humble_coder
    I'm in the process of acquiring a new design client. They are getting into the business of auto-parts wholesaling, and they want a storefront. My preference is/was to create something from scratch. However, there is an established trend in their particular market (similar parts, layout, etc.), and they insist on following the existing visual trend, as per the following: http://www.xtremediesel.com/ http://www.thoroughbreddiesel.com/ http://www.alligatorperformance.com/ My plan of attack at this point is to find a comparable WP theme and a flexible (but useful) backend for product management. Their current demo site (which their previous developer took a stab at) is using Pinnacle Cart. It is nowhere near what they need, nor is it intuitive to work with. I was actually considering Magento for its greater abilities, but I'm still weighing options. That said, my two primary dilemmas are as follows: 1) I need a theme that mimics the general style of those listed. They explicitly said they didn't want anything too clean (e.g. ThemeForest, WooThemes), as it "wasn't rugged or busy looking enough" for their field. 2) I need a WP/Magento/WP e-Commerce (or any one of a host of other) plugin that will allow bulk import/update of nearly 200,000 products, descriptions and images. I'm not opposed to manually interfacing with the DB for the import, but in the end I need a store/system that doesn't needlessly add 50 tables to accommodate some "wet behind the ears" concept of table normalization, and that is easy to add to. Anyway, if anyone has any quality suggestions regarding either of these issues, it would be most appreciated. Best.

    Read the article

  • Please recommend a patterns book for iOS development

    - by Brett Ryan
    I've read several books on iOS development and Objective-C; however, a lot of them teach how to work with interfaces and keep the model inside the view controller - i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in what the best practices are for designing the structure of an application. Specifically, I'm interested in best practices for the following: How to separate a model from the view controller. I think I know how to do this, by simply replacing the NSArray-style example with a specific model object; what I do not know is how to alert the view when the model changes. For example, in .NET I would solve this by conforming to INotifyPropertyChanged and databinding, and similarly in Java I would use PropertyChangeListener. How to create a service model for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object to manage an internal DB, and also services for communicating with remote endpoints. I need to learn the best ways to do this such that interface components can subscribe to events such as widgetUpdated. These services should be singleton classes and somehow dependency-injected into model/controller objects.

    Books I've read so far are:
    - Programming in Objective-C (4th Edition)
    - Beginning iOS 5 Development: Exploring the iOS SDK
    - The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers
    - Learn Objective-C on the Mac: For OS X and iOS

    I've also purchased the following updated books but not yet read them:
    - The Core iOS 6 Developer's Cookbook (4th edition)
    - Programming in Objective-C (5th Edition)

    I come from a Java and C# background with 15 years' experience; I understand that many of the ways I would do things in those languages may not fit the Objective-C way of developing applications. Any guidance on the topic is very much appreciated.
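
    On the model-change question specifically, the Cocoa analogues of INotifyPropertyChanged are key-value observing (KVO) and NSNotificationCenter. A minimal notification-based sketch (the notification name and the reload behaviour are illustrative, not from any of the books above):

        // In the model, after a mutation:
        [[NSNotificationCenter defaultCenter]
            postNotificationName:@"WidgetModelDidChangeNotification" object:self];

        // In the view controller, subscribe once (e.g. in viewDidLoad)
        // and refresh the table whenever the model announces a change:
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(modelDidChange:)
                                                     name:@"WidgetModelDidChangeNotification"
                                                   object:nil];

        - (void)modelDidChange:(NSNotification *)note {
            [self.tableView reloadData];
        }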

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette, with help from Microsoft ASP.NET MVP James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chock-full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, whether it was different from SQL Server, whether it was the same as EF proper, and whether there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)

    Consider the example of a hockey league. You have several different entities in the league, including games, teams that play the games, and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen: the team, games and mascot will be deleted, and the players for that team will remain in the league (and therefore the database) but should no longer be assigned to a team. So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project.

    The Reference - Understanding EF's Behaviour with an EDMX/DB First Approach

    First up, let's take a look at the DB first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:

        Players.Team_Id (NULL) -> Teams.Id
        Mascots.Id (NOT NULL) -> Teams.Id (ON DELETE CASCADE)
        Games.HomeTeam_Id (NOT NULL) -> Teams.Id
        Games.AwayTeam_Id (NOT NULL) -> Teams.Id

    Note that by specifying ON DELETE CASCADE for the Mascots -> Teams foreign key, the database will automatically delete the team's mascot when the team is deleted. While we want the same behaviour for the Games -> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying ON DELETE CASCADE on these foreign keys would cause a circular reference error: "The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE" - MSDN

    When we create an entity data model from the above database, we get the model shown in the original post's EDMX diagram. In order to get the Games to be deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at cascade delete in EF Database First, take a look at this blog post by Alex James.

    Building the Same Sample with EF Code First

    Next, we're going to build up the model with the code first approach. EF Code First is defined on the ADO.NET team blog as such: "Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database."

    Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN:

    - If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship.
    - If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null.
    - The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).

    Our DbContext consists of 4 DbSets:

        public DbSet<Team> Teams { get; set; }
        public DbSet<Player> Players { get; set; }
        public DbSet<Mascot> Mascots { get; set; }
        public DbSet<Game> Games { get; set; }

    When we set the Mascot -> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.

    Data annotations:

        public class Mascot
        {
            public int Id { get; set; }
            public string Name { get; set; }

            [Required]
            public virtual Team Team { get; set; }
        }

    Fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
        }

    The Player -> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required; however, all the Player entities must be loaded into memory for the cascading to work properly.

    The Game -> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set ON DELETE CASCADE for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database first example, this causes a circular reference error and throws the following SqlException: "Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint."

    To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.HomeGames)
                .WithRequired(g => g.HomeTeam)
                .WillCascadeOnDelete(false);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.AwayGames)
                .WithRequired(g => g.AwayTeam)
                .WillCascadeOnDelete(false);

            base.OnModelCreating(modelBuilder);
        }

    Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team:

        foreach (Game awayGame in jets.AwayGames.ToArray())
        {
            entities.Games.Remove(awayGame);
        }
        foreach (Game homeGame in homeGames)
        {
            entities.Games.Remove(homeGame);
        }
        entities.Teams.Remove(jets);
        entities.SaveChanges();

    Overriding the Defaults - When and How To

    As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity. More information is available on MSDN.

    Going Further

    These were simple examples, but they helped us illustrate a couple of points. First, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Second, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we hope you get a chance to experiment even further. For example, did you know that: Entity Framework Code First also works seamlessly with SQL Azure (MSDN); database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers); and you can use code-based migrations to manage database upgrades as your model continues to evolve (MSDN).

    Next Steps

    There's no time like the present to start the learning, so here's what you need to do: get up-to-date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012); build yourself a project to try these concepts out (or download the sample project); and get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!). Good luck!

    About the Authors

    David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta, where he builds commercial software products for the energy industry. Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community, giving presentations both locally and at conferences, and serves as the President of the Calgary .NET User Group.

    James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba, where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.

    Read the article

  • What should be the architecture of an urban game system?

    - by pmichna
    I'm going to develop an urban game using a telco API for phone geolocation and for sending/receiving messages. A player picks one of the scenarios, moves around the city, and when he hits a given location he gets a message and possibly has to answer it. I'm wondering what approach would be best in my case. I came up with this general idea:

    - Web application as the user interface (user registration, player rankings, scenario editing), written in Ruby on Rails.
    - Game server (hosting games and game logic: checking player locations, sending and receiving messages), written in Ruby.
    - Database (users, scores, scenarios, etc.), probably MySQL or some other open-source DB.

    I want to learn Ruby and RoR; that's why I chose this language and framework. Do you think it's a good choice for a game server? Another question: is this project division good? I mean, I have little experience with Ruby and Rails - that's why I'm asking. Maybe it's better to have the web application merged with the game server, and somehow have the server hosting the RoR application do tasks like pinging mobile phones and sending messages? How would that be performed? Maybe this is worth mentioning: the API is RESTful; most results are JSON, a few are XML.

    Read the article

  • Highly scalable and dynamic "rule-based" applications?

    - by Prof Plum
    For a large enterprise app, everyone knows that being able to adjust to change is one of the most important aspects of design. I use a rule-based approach a lot of the time to deal with changing business logic, with each rule stored in a DB. This allows easy changes to be made without diving into nasty details. Now, since C# cannot Eval("foo(bar);"), this is accomplished by using formatted strings stored in rows that are then processed in JavaScript at runtime. This works fine; however, it is less than elegant, and would not be the most enjoyable thing for anyone else to pick up once it becomes legacy code. Is there a more elegant solution to this? When you get into thousands of rules that change fairly frequently, it becomes a real bear - but this problem cannot be so uncommon that no one has thought of a better way to do it. Any suggestions? Is the current method defensible? What are the alternatives? Edit: just to clarify, this is a large enterprise app, so no matter which solution works, there will be plenty of people constantly maintaining its rules and data (around 10). Also, the data changes frequently enough that some sort of centralized server system is basically a must.
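
    One alternative to shipping rule strings to JavaScript is compiling rules on the fly with expression trees, so they run as ordinary delegates. A sketch with a hypothetical numeric threshold rule (a real engine would parse the stored rule text into an expression instead of hard-coding it):

        using System;
        using System.Linq.Expressions;

        class RuleEngine
        {
            // Builds "input > threshold" as a compiled delegate rather
            // than eval'ing a string at runtime.
            static Func<decimal, bool> BuildThresholdRule(decimal threshold)
            {
                var input = Expression.Parameter(typeof(decimal), "input");
                var body = Expression.GreaterThan(input, Expression.Constant(threshold));
                return Expression.Lambda<Func<decimal, bool>>(body, input).Compile();
            }

            static void Main()
            {
                var rule = BuildThresholdRule(100m); // threshold loaded from the rules DB
                Console.WriteLine(rule(150m)); // True
                Console.WriteLine(rule(50m));  // False
            }
        }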

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX, or DbContext if we go code first). I can think of a few strategies right off the bat:

    Split by schema: We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables/views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds complexity, although I assume EF makes this fairly straightforward.

    Split by intent: Instead of worrying about schemas, split the models by intent. So we'd have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that certain tables inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or do they live in a separate model that is used by every project?

    Don't split at all - one giant model: This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly - at design time, at compile time, and possibly at run time.

    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with better solutions than EF?
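
    A sketch of the "split by intent" option, with the shared User table mapped into more than one small context (all entity and context names here are hypothetical):

        using System.Data.Entity;

        public class User  { public int Id { get; set; } public string Name { get; set; } }
        public class Order { public int Id { get; set; } public int UserId { get; set; } }

        // Each module gets a small, focused model over the same database;
        // the shared User table simply appears in both contexts.
        public class SecurityContext : DbContext
        {
            public DbSet<User> Users { get; set; }
        }

        public class OrderingContext : DbContext
        {
            public DbSet<Order> Orders { get; set; }
            public DbSet<User> Users { get; set; }
        }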

    Read the article

  • Popular technical article ranking for November 2010 (OTN Japan)

    - by Yusuke.Yamamoto
    The monthly ranking of the most-read technical articles (member content) for November 2010. Notable this month: Oracle Database 10g Release 2 (10gR2) is now supported on Microsoft Windows Server 2008 R2 and Windows 7, and among the roughly 150 free technical articles, those on Oracle SQL Developer drew particular attention. Recoverable entries from the ranking include articles on Oracle Database 10gR2 on Windows Server 2008 R2 / Windows 7, an Oracle SQL Developer tutorial, Oracle ASM, the October 2010 article ranking, new features in Oracle Database 11g Release 2, LIKE (partial-match) search with Oracle Text, SQL*Loader basics, and Oracle support and operations topics from September and October 2010.

    Read the article

  • links for 2011-03-07

    - by Bob Rhubart
    DON CIO News: DON CIO Discusses Future IT Initiatives - Audio links and a little background information on recent town hall meetings hosted by Department of the Navy Chief Information Officer Terry Halvorsen. (tags: usgov usnavy cio enterprisearchitecture)

    Strassmann's Blog: Why So Many Data Centers? - "The idea of datacenter consolidation involves much more than applying simple technical solutions." - Paul Strassmann (tags: enterprisearchitecture datacenter consolidation)

    Satyajith Nair: Coherence - The next big thing for the cloud!! - "Disk-based computing is fraught with performance and management issues and doing away with Disks though not practical now, maybe true in the future. This also calls for a re-think of our current application architecture which is so focussed on disk-based persistence." - Satyajith Nair (tags: oracle infosys coherence grid cloud)

    TechCast: GlassFish Server and WebLogic - Interoperability and Integration - VP of Development Anil Gaur and Product Manager Adam Leftik explain Oracle's strategy for increasing integration between GlassFish Server and Oracle WebLogic Server, with an overview of new features and functionality for developers in GlassFish 3.1. (tags: ping.fm)

    Oracle Fusion and Oracle Fusion Applications: Overview | OracleApps Epicenter - So what is Oracle Fusion? People often get confused with this term. To start with, it is a good idea to know the difference between Fusion... (tags: ping.fm)

    Marc Kelderman: OSB: Automatic update of Service Accounts - Solution architect Marc Kelderman shares a workaround for using different Service Accounts in multiple environments. (tags: oracle otn sca bpel soa bpm servicebus)

    Perfect Integration 1 - Architectural Approach - "First post in a series of 5-10, I will release all my views and opinions on the Art of Integration. I challenge you to disagree, and bash me with arguments and reasoning." - Martijn Linssen (tags: enterprisearchitecture integration)

    Edwin Biemond: Set the Initial Focus on a component in a Page or a Fragment - Edwin says: "This is not so hard to do, but sometimes it can be tricky to find the id of a component when you use regions (Bounded Task Flows)." (tags: oracle otn oracleace java soa)

    Oracle Linux and Oracle Virtualization at Collaborate 2011 - Information on more than 200 Oracle-hosted sessions with the latest insights and guidance from Oracle executives, product managers, and developers. (tags: oracle virtualization linux ioug oaug)

    Read the article

  • How do I create water like in New Super Mario Bros?

    - by user1103457
    I assume the water in New Super Mario Bros works the same as in the first part of this tutorial: http://gamedev.tutsplus.com/tutorials/implementation/make-a-splash-with-2d-water-effects/ But in New Super Mario Bros the water also has constant waves on the surface, and the splashes look very different. Another difference is that in the tutorial, a splash first creates a deep "hole" in the water at its origin; in New Super Mario Bros this hole is absent or much smaller. When I refer to the splashes in New Super Mario Bros, I mean the splashes the player creates when jumping into and out of the water. For reference you can use this video: http://www.ign.com/videos/2012/11/17/new-super-mario-bros-u-3-star-coin-walkthrough-sparkling-waters-1-waterspout-beach Just after 00:50, when the camera isn't moving, you can get a good look at the water and the constant waves; there are also some good examples of the splashes during that time. How do they create the constant waves and the splashes? I am programming in XNA. (I have tried this myself but couldn't really get it all to work well together.) (And as bonus questions: how do they create the light spots just under the surface of the waves, and how do they texture the deeper parts of the water? This is the first time I've tried to create water like this.)
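
    For the constant surface motion, one common approach (a sketch, not necessarily what Nintendo does) is to layer a couple of scrolling sine waves on top of the spring-based splash heights from the tutorial:

        // Baseline surface height at horizontal position x and time t:
        // two sines at different frequencies and speeds give rolling
        // waves, and the spring/splash displacement is added on top.
        float SurfaceHeight(float x, float t, float splashOffset)
        {
            const float baseLevel = 300f; // water line in pixels (example value)
            float waves = 4f * (float)Math.Sin(x * 0.02f + t * 1.5f)
                        + 2f * (float)Math.Sin(x * 0.045f - t * 2.3f);
            return baseLevel + waves + splashOffset;
        }

    Keeping the wave amplitudes small relative to the splash springs is what makes a splash read as a surface disturbance rather than digging a visible hole.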

    Read the article

  • Site in subdomain (MaraDNS + Nginx)

    - by Grzegorz
    Hi, I'm doing some experiments on my VPS with Ubuntu. I've installed MaraDNS with Nginx. At this point I've correctly launched a static site which is available from the Internet (maindomain.com). As the next step, I want to add a new site which will be available on a subdomain, for example dev.maindomain.com. I've tried this in the db.maindomain.com file (used by MaraDNS):

        maindomain.com. xxx.xxx.xxx.xxx
        www.maindomain.com. CNAME maindomain.com.
        dev.maindomain.com. xxx.xxx.xxx.xxx

    where xxx.xxx.xxx.xxx is the VPS IP address. In nginx.conf I have:

        server {
            listen 80;
            server_name maindomain.com;
            access_log /var/log/nginx/maindomain.com.log;
            location / {
                root /var/www/maindomain.com;
                index index.html;
            }
        }
        server {
            listen 80;
            server_name dev.maindomain.com;
            access_log /var/log/nginx/dev.maindomain.com.log;
            location / {
                root /var/www/dev.maindomain.com;
                index index.html;
            }
        }

    With this configuration maindomain.com works properly, but dev.maindomain.com isn't available. When I try ping dev.maindomain.com, I get my xxx.xxx.xxx.xxx IP. Do you have any suggestions how to resolve this problem?
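
    A quick way to split the problem in two (since ping already resolves the name, DNS may be fine and nginx the real suspect):

        # Ask the nginx on the VPS for the dev vhost explicitly:
        curl -H "Host: dev.maindomain.com" http://xxx.xxx.xxx.xxx/

        # Query MaraDNS directly, bypassing any intermediate resolver:
        dig @xxx.xxx.xxx.xxx dev.maindomain.com A +short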

    Read the article

  • Can AJAX in a CMS slow down your server?

    - by Saif Bechan
    I am currently developing some plugins for WordPress, and I was wondering which route to take. Let's take an example: you want to display the last 3 tweets on your page.

    Option 1: You do things the normal way inside WordPress. Someone enters the website; while generating the page, you fetch the tweets in PHP via the Twitter API and display them where you want. The small problem with this is that you have to wait for the response from Twitter. This takes a few ms - no real problem, this question is just out of curiosity.

    Option 2: Here you don't do anything in WordPress on the initial load, but you still have the API calls inside WordPress. You generate the page, and as soon as it is done on the client side, you make a small AJAX call back to the server - via a WordPress plugin - to fetch the latest tweets, i.e. asynchronously. The problem with this, IMO, is that you put much more stress on your server: for starters you have two HTTP requests instead of one, and the WordPress core has to load twice instead of once.

    Other options: Now I know there are a lot of other options:
    1) Getting the tweets directly via JavaScript - no stress on the server at all.
    2) Caching the tweets so they are fetched from the DB instead of hitting the API every time.
    3) Getting the tweets from an AJAX call that is not a WordPress plugin.
    4) Many more.

    My question: Comparing only options 1 and 2, which would be the better choice?
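
    For what it's worth, option 2's server cost is usually tamed by adding the caching mentioned in other-option 2. A sketch using WordPress transients (fetch_tweets_from_api() is a hypothetical wrapper around the Twitter call):

        <?php
        // Return cached tweets, refreshing from the API at most
        // once every 5 minutes (300 seconds).
        function get_cached_tweets() {
            $tweets = get_transient('latest_tweets');
            if (false === $tweets) {
                $tweets = fetch_tweets_from_api(); // hypothetical API wrapper
                set_transient('latest_tweets', $tweets, 300);
            }
            return $tweets;
        }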

    Read the article
