Search Results

Search found 78101 results on 3125 pages for 'back up data'.


  • Is it possible (and how, if so) to dump two concatenated disks onto a new disk using dd?

    - by pedromarce
    Hi, I have a LaCie enclosure set up with two 500 GB disks configured as a single 1 TB drive; the only partition, covering the whole drive, is HFS+ journaled. The controller in the enclosure has died, so the drive refuses to mount anymore. I was able to remove the two disks from the enclosure, connect them over USB, and use a program called R-Studio (a RAID recovery program) to confirm that the enclosure's controller was using both disks concatenated (not striped). By configuring that option in R-Studio I can get all the information back. But rather than buying an R-Studio license for just one use, I would rather buy a new 1 TB disk and write all the information from those two disks onto the new one. I can use Mac or Linux machines to do it, and I think it should be possible to use the dd command on Linux to concatenate those two drives onto the new one in the right order, so the volume works again on the new disk and I can reformat the old ones - but I am not sure. So, is it possible in this scenario to write both disks onto a new one using dd? Any hints on what the command would look like? Thanks,
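
    For reference, here is roughly what that could look like. This is only a sketch - the device names are examples (/dev/sdb and /dev/sdc as the two halves, in their original order, and /dev/sdd as the new disk); verify them with lsblk or fdisk -l first, since getting the order or the target wrong is destructive, and note the new disk must be at least as large as the two halves combined:

        # simplest approach: stream both halves back to back onto the new disk
        sudo sh -c 'cat /dev/sdb /dev/sdc > /dev/sdd'

        # dd equivalent: copy the first disk, then write the second starting at
        # the sector where the first one ends (bs=512 for the seek step is slow
        # but always divides the disk size evenly)
        SIZE1=$(sudo blockdev --getsize64 /dev/sdb)
        sudo dd if=/dev/sdb of=/dev/sdd bs=4M
        sudo dd if=/dev/sdc of=/dev/sdd bs=512 seek=$(( SIZE1 / 512 ))

    Once written, the HFS+ volume on the new disk can be checked from a Mac (Disk Utility or fsck_hfs) before the old disks are reformatted.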

    Read the article

  • Moving Data from One Column into Six Columns

    - by Alex Rudd
    I have an Excel sheet where six columns of data are currently all combined into one column. I need to separate them, but the issue is that the first column is words that are sometimes one word and sometimes two. Here is an example:

        Twin 70 442 186 310 221
        Twin Futon 70 389 160 272 195
        XL twin 70 463 196 324 231
        XL Twin Futon 70 418 174 293 209
        Double 100 590 245 413 295

    How can I separate these data sets while keeping the words together in the same column?
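
    Because the name can be one or two words, a fixed split from the left (e.g. Text to Columns on spaces) won't line up; splitting from the right is more reliable, since the last five tokens are always numbers. Below is a minimal VBA sketch of that idea - it assumes the combined text is in column A and writes the name to column B and the five numbers to C:G (the ranges and sheet layout are assumptions to adjust):

        Sub SplitRows()
            Dim r As Range, parts() As String
            Dim i As Long, n As Long, nameText As String
            For Each r In Range("A1", Cells(Rows.Count, "A").End(xlUp))
                parts = Split(Application.Trim(r.Value), " ")
                n = UBound(parts)
                If n >= 5 Then
                    ' the last five tokens are the numeric columns (C through G)
                    For i = 0 To 4
                        r.Offset(0, 6 - i).Value = Val(parts(n - i))
                    Next i
                    ' everything before them is the (one- or two-word) name
                    nameText = parts(0)
                    For i = 1 To n - 5
                        nameText = nameText & " " & parts(i)
                    Next i
                    r.Offset(0, 1).Value = nameText
                End If
            Next r
        End Sub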

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
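
    For reference, the hint simply attaches to the end of the source query. Against the AdventureWorks example used above, it would look like this:

        -- optimise for the first buffer-load of rows rather than total runtime;
        -- the optimizer may then prefer a streaming (non-blocking) plan
        SELECT DISTINCT Weight
        FROM Production.Product
        OPTION (FAST 10000);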
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow, and watch operators turn Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.

    Read the article

  • Markus Zirn, "Big Data with CEP and SOA" @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    FOR AN EXCLUSIVE ORACLE DISCOUNT, ENTER PROMO CODE: DJMXZ370. Early-Bird Registration is Now Open with Special Pricing! Register before July 1, 2012 to qualify for discounts. Visit the Registration page for details. The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).

    Big Data with CEP and SOA - September 25, 2012 - 14:15. Speakers: Markus Zirn, Oracle, and Baz Kuthi, Avocent. The "Big Data" trend is driving new kinds of IT projects that process machine-generated data. Such projects store and mine data using Hadoop/MapReduce, but they also analyze streaming data via event-driven patterns, which can be called "Fast Data", complementary to "Big Data". This session highlights how "Big Data" and "Fast Data" design patterns can be combined with SOA design principles into modern, event-driven architectures. We will describe specific architectures that combine CEP, Distributed Caching, Event-Driven Network, SOA Composites, Application Development Framework, and Hadoop. Architecture patterns include pre-processing and filtering event streams as close as possible to the event source, in-memory master data for event pattern matching, event-driven user interfaces, and distributed event processing. The focus is on how "Fast Data" requirements are elegantly integrated into a traditional SOA architecture.

    Markus Zirn is Vice President of Product Management covering Oracle SOA Suite, SOA Governance, Application Integration Architecture, BPM, BPM Solutions, Complex Event Processing and UPK, an end-user learning solution. He is the author of “The BPEL Cookbook” (rated best book on Service-Oriented Architecture in 2007) as well as “Fusion Middleware Patterns”. Previously, he was a management consultant with Booz Allen & Hamilton’s High Tech practice in Duesseldorf and San Francisco, and Vice President of Product Marketing at QUIQ. Mr. Zirn holds a Master's in Electrical Engineering from the University of Karlsruhe and is an alumnus of the Tripartite program, a joint European degree from the University of Karlsruhe, Germany, the University of Southampton, UK, and ESIEE, France.

    KEYNOTES & SPEAKERS: More than 80 international subject matter experts will be speaking at the Symposium. Below are the confirmed keynotes and speakers so far; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page.

    CONFERENCE THEMES & TRACKS: Cloud Computing Architecture & Patterns; New SOA & Service-Orientation Practices & Models; Emerging Service Technology Innovation; Service Modeling & Analysis Techniques; Service Infrastructure & Virtualization; Cloud-based Enterprise Architecture; Business Planning for Cloud Computing Projects; Real World Case Studies; Semantic Web Technologies (with & without the Cloud); Governance Frameworks for SOA and/or Cloud Computing Projects; Service Engineering & Service Programming Techniques; Interactive Services & the Human Factor; New REST & Web Services Tools & Techniques.

    Oracle Specialized SOA & BPM Partners: Oracle Specialized partners have proven their skills through certifications and customer references. To find a local Specialized partner, please visit http://solutions.oracle.com. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; to register, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Markus Zirn, SOA Symposium, Thomas Erl, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • Ways to use your skills as a developer to give back to the community/charities.

    - by Ryan Hayes
    Recently I came upon a community event called GiveCamp. GiveCamp is a weekend-long event where technology professionals from designers, developers and database administrators to marketers and web strategists donate their time to provide solutions for non-profit organizations. Since its inception in 2007, the GiveCamp program has provided benefits to over 150 charities, with a value of developer and designer time exceeding $1,000,000 in services! Coming from a very rural part of the country where there is a huge opportunity for charity events like this, it got me wondering. Are there other large movements like GiveCamp that are out there? GiveCamp is sponsored by Microsoft, so of course most are run through .NET user groups. Are there other flavors of it? Different types? Java/Python/other open source charity movements? If not, how do you give back?

    Read the article

  • Restart mysql keeping the data

    - by sitonico
    I'm quite new to MySQL, so let me know if I'm missing something. I took some holidays, and when I got back to work and tried to log in to phpMyAdmin I got ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). I had never had this problem, so I browsed around for a solution. I tried some things, and I'm afraid I touched too much. I couldn't solve the problem, and then I realized that I had some updates pending, and I thought they might be helpful for MySQL. I also realized that when I first ran those updates, they had stopped because I ran out of disk space, so I restarted them. Then, when the system was configuring mysql, it didn't advance. I waited for a long time and then just stopped it and restarted the computer. After that, I tried to uninstall MySQL with sudo apt-get remove mysql-server-5.1 and install it again, but it didn't work. Now I have 2 questions: What do you think is happening? Should I remove MySQL completely, and what should I do? I'm afraid of losing my databases - is there any way to recover the data? Thank you very much in advance. -----------EDIT------- These are the messages: alfonso@alfonso-laptop:/$ tail -F /var/log/syslog | grep Feb 15 15:08:01 alfonso-laptop init: mysql post-start process (15192) terminated with status Feb 15 15:08:01 alfonso-laptop init: mysql main process (15263) terminated with status Feb 15 15:08:01 alfonso-laptop init: mysql main process ended, Feb 15 15:08:31 alfonso-laptop init: mysql post-start process (15264) terminated with status Feb 15 15:08:31 alfonso-laptop init: mysql main process (15358) terminated with status Feb 15 15:08:31 alfonso-laptop init: mysql main process ended, Feb 15 15:09:01 alfonso-laptop init: mysql post-start process (15359) terminated with status Feb 15 15:09:01 alfonso-laptop init: mysql main process (15447) terminated with status Feb 15 15:09:01 alfonso-laptop init: mysql main process ended, Feb 15 15:09:32 alfonso-laptop init: mysql post-start process (15448) terminated with status 1 This is the content of error.log-old 110128 13:17:20 [Note] /usr/sbin/mysqld: Normal shutdown 110128 13:17:20 [Note] Event Scheduler: Purging the queue. 0 events 110128 13:17:20 InnoDB: Starting shutdown... 110128 13:17:22 InnoDB: Shutdown completed; log sequence number 0 590872 110128 13:17:22 [Note] /usr/sbin/mysqld: Shutdown complete 110214 2:08:18 [Note] Plugin 'FEDERATED' is disabled. 110214 2:08:19 InnoDB: Started; log sequence number 0 590872 110214 2:08:19 [Note] Event Scheduler: Loaded 0 events 110214 2:08:19 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.41-3ubuntu12.8' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) -- Some links of similar problems https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.1/+bug/573318 http://www.linuxquestions.org/questions/linux-newbie-8/lamp-install-on-lucid-mysqld-sock-missing-mysql-terminating-status%3D1-853152/ It seems it's a permissions problem... But I don't know which permissions I should change... SOLVED -- mysql error 2002 "cannot connect to socket"
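
    Before removing or reinstalling anything, the databases themselves can usually be preserved by copying MySQL's raw data directory first - a sketch, assuming the Ubuntu default paths:

        # back up the raw data files (InnoDB needs ibdata1 and ib_logfile* too)
        sudo cp -a /var/lib/mysql /var/lib/mysql.backup

        # then try a clean reinstall
        sudo apt-get purge mysql-server-5.1
        sudo apt-get install mysql-server-5.1

        # if the fresh install starts, stop it and put the old data back
        sudo service mysql stop
        sudo cp -a /var/lib/mysql.backup/. /var/lib/mysql/
        sudo chown -R mysql:mysql /var/lib/mysql
        sudo service mysql start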

    Read the article

  • Accidentally uninstalled Ubuntu 14.04 system settings. How to get them back?

    - by Dan
    Somehow, while cleaning up useless software (using the Software Center), I uninstalled Ubuntu "System Settings". I did this by mistake. Now I can't find the system settings application in the Software Center (and there are so many items in history...). It seems strange to me, because usually when I try to uninstall something critical (system testing, for example), the dependency manager tells me it will then uninstall the whole desktop system. I am sure I did not get that warning. So I need the name of the package to install, or a command-line command, rather than a system restore, to get it back. One very interesting thing: if you ever want to play with this and reproduce it, you will be confused to see the Ubuntu Mobile system settings instead! Yes, mobile network and touchscreen settings! Happy pre-release viewing!

    Read the article

  • Why do I get "ignoring out-of-zone data" when restarting BIND?

    - by 6bytes
    I had been using my own DNS server but then moved to a third-party DNS provider. Yesterday I wanted to go back to using my own DNS servers and cancel the third-party service. I lowered the TTL in the current DNS configuration and changed the DNS info for my domain at GoDaddy, and that's when the problems started. My domain seems to be working for some people but not for others, so clearly something is wrong. When restarting BIND (service named restart) everything seems to be OK, but later, in email from Logwatch, I'm getting errors like this: mydomain.com:30: ignoring out-of-zone data (ns1.mydns.com): 3 Time(s) mydomain.info:16: ignoring out-of-zone data (ns1.mydns.com): 5 Time(s) Can anyone point me in the right direction? My BIND configuration for those two domains is below: File: /var/named/chroot/etc/zones.external zone "mydomain.com" IN { type master; file "mydomain.com"; allow-transfer { 213.251.188.140; }; allow-update { none; }; notify yes; also-notify { 213.251.188.140; }; }; zone "mydomain.info" IN { type master; file "mydomain.info"; allow-transfer { 213.251.188.140; }; allow-update { none; }; notify yes; also-notify { 213.251.188.140; }; }; File /var/named/chroot/var/named/mydomain.com being my main domain $TTL 3600 $ORIGIN mydomain.com. @ IN SOA ns1.mydns.com. ns2.mydns.com. ( 2010032101 ; Serial 10800 ; Refresh 3600 ; Retry 2419200 ; Expire 3600 ) ; NXDOMAIN TTL IN NS ns1.mydns.com. IN NS ns2.mydns.com. IN MX 10 ASPMX.L.GOOGLE.COM. IN MX 20 ALT1.ASPMX.L.GOOGLE.COM. IN MX 20 ALT2.ASPMX.L.GOOGLE.COM. IN MX 30 ASPMX2.GOOGLEMAIL.COM. IN MX 30 ASPMX3.GOOGLEMAIL.COM. IN MX 30 ASPMX4.GOOGLEMAIL.COM. IN MX 30 ASPMX5.GOOGLEMAIL.COM. IN A 111.111.111.111 * IN A 111.111.111.111 edu IN A 111.111.111.111 googleXXXXXXXXXXXXXXXX IN CNAME google.com. ns1.mydns.com. IN A 111.111.111.111 File /var/named/chroot/var/named/mydomain.info just an alias in apache for mydomain.com $TTL 86400 $ORIGIN mydomain.info. @ IN SOA ns1.mydns.com. ns2.mydns.com. ( 2009042901 ; Serial 10800 ; Refresh 3600 ; Retry 2419200 ; Expire 3600 ) ; NXDOMAIN TTL IN NS ns1.mydns.com. IN NS ns2.mydns.com. IN A 111.111.111.111 * IN A 111.111.111.111 ns1.mydns.com. IN A 111.111.111.111
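
    In BIND, "ignoring out-of-zone data" means a zone file contains a record whose owner name is not inside that zone's origin. Here the likely culprits are the trailing glue lines - ns1.mydns.com. is not within mydomain.com. or mydomain.info., so named drops those records (the :30 and :16 in the Logwatch messages are the offending line numbers in each zone file):

        ; in mydomain.com (and similarly in mydomain.info)
        ns1.mydns.com.  IN  A  111.111.111.111   ; out of zone - remove this line;
                                                 ; it belongs in the mydns.com zone

    The message itself is a warning rather than the cause of the partial outage, which may simply be caching/propagation from the GoDaddy nameserver change, but cleaning it up keeps Logwatch quiet.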

    Read the article

  • How do I stop projected textures from appearing on the "back" of the mesh as well?

    - by user975135
    I want to create blood wounds on my characters' bodies by using projected textures. I've watched some commentaries on games like Left 4 Dead, and they say they use projected textures for the blood. But the way projected textures work, if you project a texture onto a rigged character - say, his chest - it will also appear on his back. So what's the trick? How do I get projected textures to appear on only one "side" of the mesh? I use the Panda3D game engine, if that helps.
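
    One common trick (independent of any particular engine) is to do the projection in a shader and discard fragments whose surface normal faces away from the projector, so the decal only lands on front-facing geometry. A rough GLSL-style sketch of the idea - the variable names, and how proj_uv and proj_dir get fed in, are assumptions that depend on your setup:

        uniform sampler2D blood_tex;   // the projected wound texture
        uniform vec3 proj_dir;         // projector forward vector (same space as normal)
        varying vec3 normal;           // interpolated surface normal
        varying vec4 proj_uv;          // projective texture coordinates

        void main() {
            // surfaces facing away from the projector get no decal
            if (dot(normalize(normal), -proj_dir) <= 0.0)
                discard;
            gl_FragColor = texture2DProj(blood_tex, proj_uv);
        }

    A cruder alternative that also helps is limiting the projector lens's far distance so its frustum ends inside the body.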

    Read the article

  • How do I remove nitrogen and go back to Ubuntu's default wallpaper manager?

    - by jonalmeida
    I was using nitrogen when I using openbox, but now I'm trying to stop it from setting the wallpaper so I can use the Ubuntu default (i.e via the System Settings panel). What I've done so far: sudo apt-get remove nitrogen Removed nitrogen configs at ~/.config/nitrogen/ Checked to make sure that draw_background in /desktop/gnome/background was checked So far it still doesn't work. I can't right-click on the desktop, so I'm guessing that I've missed something to get Nautilus back to drawing icons on the screen. Any help is much appreciated. Thanks!

    Read the article

  • I removed Ubuntu from the BIOS menu but it comes back.

    - by jimirings
    For reasons too long to explain, I reluctantly removed Ubuntu from my computer. After completely removing it and deleting the partition that it was installed onto, I discovered that I still had two Ubuntu entries in the boot order in my BIOS menu. I deleted them by following the instructions in this answer: http://askubuntu.com/a/63613/54934 As I was doing it, everything appeared to go smoothly. However, upon reboot one of them came back. What's going on here? How do I delete it permanently? I'll gladly provide any other information that may be needed to diagnose the problem. Thanks.

    Read the article

  • What exercises counter the back pain of sitting at a computer for 15 hours a day? [on hold]

    - by Sam
    I work up to 15 hours a day (not every day :)) programming. When I do this I get a very sore lower back. I don't want to spend $1000 on a special chair like Joel Spolsky recommends. I'm sure I'm not alone here. Has anybody encountered this and found an exercise or other method to combat it? Maybe somebody with more physical education than me knows about opposing muscle groups or something. PS: not working fifteen hours a day isn't an answer - it's how I work best. And it's not off topic, as programming is about more than code; it's a verb.

    Read the article

  • Correcting a messed-up file tree in an NTFS partition

    - by Fullmooninu
    It's a really messed-up situation, but I'm nearly out of options. It's my personal hard drive, so it's very important to me, and yes, I have no backup =( The short story: 1) I have two discs: one with Windows, and another where I had a bit of empty space at the front of the disk so I could install Linux. The rest was occupied by a 1.8 TB NTFS partition filled with data. 2) I installed Linux, and after a while realized there was not enough space for everything, so I tried using GParted and told it to resize the NTFS partition to a smaller size. 3) The system jammed. I had to reboot, which broke the resizing operation. Here's what I did to fix it: a) Rebooted into a Linux live session and used TestDisk to deep-analyze the disk and recover the possible partitions. It found several versions of the NTFS partition, probably made during the resizing. I told TestDisk to open every one of them, and only one could list its files. When trying to open the other options in TestDisk, it showed an error message. I assumed the one without errors was the correct one, and told TestDisk to recover the partition and write a new MBR. b) The partition had errors, and Linux has an NTFS fixing tool; I used it, but the system still had errors. c) So I booted into Windows and used chkdsk to correct all errors in the partition. d) Everything seems fine, but now, back in Windows, when I open one file, it opens another file, or part of another file. As in, some files took up the position of other files. What I think happened is that I recovered an old tree, not the most current one - and that one just happened to be intact, while the most recent one was damaged. As such, the files that were moved during the failed resizing were wrongly assumed, during the automatic correction, to be in their correct places. So when I open a file, it tries to open another one. Radiohead - Creep.mp3 will open, and it will actually be a bit of another song, or even code from a jpg. Some files seem to be all right, but others seem to have had their positions taken by other files. Does anyone know of something really powerful that can help me solve this?

    Read the article

  • Coding with laptop and external screen - neck, back, comfort ...

    - by Xorty
    I guess I'm not the only one here coding with a laptop + external keyboard + external screen. I can't really decide. Setup 1: put the external screen directly in front of my eyes (upright) and move the laptop to the side. That feels more comfortable, but I can't see the 15" laptop screen very well now that it's farther away - it feels unused. Setup 2: put the laptop in front of me and move the external monitor to the side. That feels like more efficient use of space, but I'm afraid my neck/back might start hurting, since I'd need to turn my head often. What do you prefer? Any good advice? Sacrificing health is definitely not an option here; that's why I'm worried and asking this silly question. Thanks

    Read the article

  • Get Your Google Back: Google doesn't want to lose its users on Windows 8 and explains how to get its services back on the OS

    Get Your Google Back: Google doesn't want to lose its users on Windows 8 and explains how to get its services back on the OS. Windows 8 has been available for a few days now on new PCs and tablets running Microsoft's new operating system. Owners of a copy of the OS will certainly have noticed the prominence given to Bing and Internet Explorer 10, which are used as the default search engine and default browser respectively - a move that is surely not viewed favorably by the publishers of competing products. Google, for its part, has already reacted by putting together a way to bring its services back to Windows 8.

    Read the article

  • Laptop suspend appears broken, what must I roll back?

    - by M. Tibbits
    Question: Which package contains the subprogram responsible for the KMenu option "sleep"? Background: I've been running Kubuntu 10.04.1 and I am completely updated. Recently (within the past month), the "sleep" menu item has stopped working. It just sits there as if I had clicked nothing. I've checked all of the logs in /var/log, and nothing is added when I click sleep. I'm guessing that something I updated has bolloxed things up, but I don't know which package contains the component that I need to roll back. In the meantime, I've installed uswsusp, but s2ram & s2both don't ask for my password when the laptop resumes - which really bothers me. So now that I've got a little time to track this down, I had to post - any ideas??
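
    One way to narrow it down is to bypass KDE entirely and trigger the suspend from a terminal - if these work, the breakage is in the desktop layer rather than the kernel. A diagnostic sketch (paths and services per a default Kubuntu 10.04 install):

        # trigger a suspend directly; if this works, KDE/DBus is the suspect
        sudo pm-suspend

        # pm-utils logs its hook output here
        less /var/log/pm-suspend.log

        # the KMenu sleep item normally goes through UPower over DBus;
        # this asks the same service directly
        dbus-send --system --print-reply \
          --dest=org.freedesktop.UPower /org/freedesktop/UPower \
          org.freedesktop.UPower.Suspend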

    Read the article

  • Overly accessible and incredibly resource hungry relationships between business objects. How can I f

    - by Mike
    Hi, Firstly, This might seem like a long question. I don't think it is... The code is just an overview of what im currently doing. It doesn't feel right, so I am looking for constructive criticism and warnings for pitfalls and suggestions of what I can do. I have a database with business objects. I need to access properties of parent objects. I need to maintain some sort of state through business objects. If you look at the classes, I don't think that the access modifiers are right. I don't think its structured very well. Most of the relationships are modelled with public properties. SubAccount.Account.User.ID <-- all of those are public.. Is there a better way to model a relationship between classes than this so its not so "public"? The other part of this question is about resources: If I was to make a User.GetUserList() function that returns a List, and I had 9000 users, when I call the GetUsers method, it will make 9000 User objects and inside that it will make 9000 new AccountCollection objects. What can I do to make this project not so resource hungry? Please find the code below and rip it to shreds. public class User { public string ID {get;set;} public string FirstName {get; set;} public string LastName {get; set;} public string PhoneNo {get; set;} public AccountCollection accounts {get; set;} public User { accounts = new AccountCollection(this); } public static List<Users> GetUsers() { return Data.GetUsers(); } } public AccountCollection : IEnumerable<Account> { private User user; public AccountCollection(User user) { this.user = user; } public IEnumerable<Account> GetEnumerator() { return Data.GetAccounts(user); } } public class Account { public User User {get; set;} //This is public so that the subaccount can access its Account's User's ID public int ID; public string Name; public Account(User user) { this.user = user; } } public SubAccountCollection : IEnumerable<SubAccount> { public Account account {get; set;} public SubAccountCollection(Account account) { this.account = account; } public IEnumerable<SubAccount> GetEnumerator() { return Data.GetSubAccounts(account); } } public class SubAccount { public Account account {get; set;} //this is public so that my Data class can access the account, to get the account's user's ID. public SubAccount(Account account) { this.account = account; } public Report GenerateReport() { Data.GetReport(this); } } public static class Data { public static List<Account> GetSubAccounts(Account account) { using (var dc = new databaseDataContext()) { List<SubAccount> query = (from a in dc.Accounts where a.UserID == account.User.ID //this is getting the account's user's ID select new SubAccount(account) { ID = a.ID, Name = a.Name, }).ToList(); } } public static List<Account> GetAccounts(User user) { using (var dc = new databaseDataContext()) { List<Account> query = (from a in dc.Accounts where a.UserID == User.ID //this is getting the user's ID select new Account(user) { ID = a.ID, Name = a.Name, }).ToList(); } } public static Report GetReport(SubAccount subAccount) { Report report = new Report(); //database access code here //need to get the user id of the subaccount's account for data querying. //i've got the subaccount, but how should i get the user id. //i would imagine something like this: int accountID = subAccount.Account.User.ID; //but this would require the subaccount's Account property to be public. //i do not want this to be accessible from my other project (UI). 
//reading up on internal seems to do the trick, but within my code it still feels //public. I could restrict the property to read, and only private set. return report; } public static List<User> GetUsers() { using (var dc = new databaseDataContext()) { var query = (from u in dc.Users select new User { ID = u.ID, FirstName = u.FirstName, LastName = u.LastName, PhoneNo = u.PhoneNo }).ToList(); return query; } } }
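
    On the resource side of the question, one option is to stop constructing the child collection in the constructor and create it lazily on first access, so instantiating 9000 Users no longer allocates 9000 AccountCollections up front. A sketch against the classes above:

        public class User
        {
            public string ID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string PhoneNo { get; set; }

            private AccountCollection accounts;

            // created on first use instead of eagerly in the constructor
            public AccountCollection Accounts
            {
                get { return accounts ?? (accounts = new AccountCollection(this)); }
            }
        }

    The same thinking applies to GetUsers(): returning IEnumerable<User> and yielding rows as they stream from the database avoids materialising all 9000 objects at once.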

    Read the article

  • Client no longer getting data from Web Service after introducing targetNamespace in XSD

    - by Laurence
    Sorry if there is way too much info in this post – there’s a load of story before I get to the actual problem. I thought I‘d include everything that might be relevant as I don’t have much clue what is wrong. I had a working web service and client (both written with VS 2008 in C#) for passing product data to an e-commerce site. The XSD started like this: <xs:schema id="Ecommerce" elementFormDefault="qualified" xmlns:mstns="http://tempuri.org/Ecommerce.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="eur"> <xs:complexType> <xs:sequence> <xs:element ref="sec" minOccurs="1" maxOccurs="1"/> </xs:sequence> etc Here’s a sample document sent from client to service: <eur xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T17:16:34.523" version="1.1"> <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1"> <data /> </sec> </eur> Then, I had to give the service a targetNamespace. Actually I don’t know if I “had” to set it, but I added (to the same VS project) some code to act as a client to a completely unrelated service (which also had no namespace), and the project would not build until I gave my service a namespace. Now the XSD starts like this: <xs:schema id="Ecommerce" elementFormDefault="qualified" xmlns:mstns="http://tempuri.org/Ecommerce.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.company.com/ecommerce" xmlns:ecom="http://www. company.com/ecommerce"> <xs:element name="eur"> <xs:complexType> <xs:sequence> <xs:element ref="ecom:sec" minOccurs="1" maxOccurs="1" /> </xs:sequence> etc As you can see above I also updated all the xs:element ref attributes to give them the “ecom” prefix. Now the project builds again. I found the client needed some modification after this. The client uses a SQL stored procedure to generate the XML. This is then de-serialised into an object of the correct type for the service’s “get_data” method. The object’s type used to be “eur” but after updating the web reference to the service, it became “get_dataEur”. And sure enough the parent element in the XML had to be changed to “get_dataEur” to be accepted. Then bizarrely I also had to put the xmlns attribute containing my namespace on the “sec” element (the immediate child of the parent element) rather than the parent element. 
Here’s a sample document now sent from client to service: <get_dataEur xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1"> <sec xmlns="http://www.company.com/ecommerce" guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1"> <data /> </sec> </get_dataEur> If in the service’s get_data method I then serialize the incoming object I see this (the parent element is “eur” and the xmlns attribute is on the parent element): <eur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.company.com/ecommerce" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1"> <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1"> <data /> </sec> </eur> The service then prepares a reply to go back to the client. The XML looks like this (the important data being sent back is the date_stamp attribute in the last_sent element): <eur xmlns="http://www.company.com/ecommerce" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:22:57.530" version="1.1"> <sec version="1.1" xmlns=""> <data> <last_sent date_stamp="2010-02-25T15:15:10.193" /> </data> </sec> </eur> Now finally, here’s the problem!!! The client does not see any data – all it sees is the parent element with nothing inside it. If I serialize the reply object in the client code it looks like this: <get_dataResponseEur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:22:57.53" version="1.1" /> So, my questions are: why isn’t my client seeing the contents of the reply document? how do I fix it? why do I have to put the xmlns attribute on a child element rather than the parent element in the outgoing document? Here’s a bit more possibly relevant info: The client code (pre-namespace) called the service method like this: XmlSerializer serializer = new XmlSerializer(typeof(eur)); XmlReader reader = xml.CreateReader(); eur eur = (eur)serializer.Deserialize(reader); service.Credentials = new NetworkCredential(login, pwd); service.Url = url; rc = service.get_data(ref eur); After the namespace was added I had to change it to this: XmlSerializer serializer = new XmlSerializer(typeof(get_dataEur)); XmlReader reader = xml.CreateReader(); get_dataEur eur = (get_dataEur)serializer.Deserialize(reader); get_dataResponseEur eur1 = new get_dataResponseEur(); service.Credentials = new NetworkCredential(login, pwd); service.Url = url; rc = service.get_data(eur, out eur1);

    Read the article

  • NSOutlineView not refreshing when objects added to managed object context from NSOperations

    - by John Gallagher
    Background Cocoa app using core data Two processes - daemon and a main UI Daemon constantly writing to a data store UI process reads from same data store NSOutlineView in UI is bound to an NSTreeController which is bound to Application with key path of delegate.interpretedMOC What I want When the UI is activated, the outline view should update with the latest data inserted by the daemon. The Problem Main Thread Approach I fetch all the entities I'm interested in, then iterate over them, doing refreshObject:mergeChanges:YES. This works OK - the items get refreshed correctly. However, this is all running on the main thread, so the UI locks up for 10-20 seconds whilst it refreshes. Fine, so let's move these refreshes to NSOperations that run in the background instead. NSOperation Multithreaded Approach As soon as I move the refreshObject:mergeChanges: call into an NSOperation, the refresh no longer works. When I add logging messages, it's clear that the new objects are loaded in by the NSOperation subclass and refreshed. Not only that, but they are What I've tried I've messed around with this for 2 days solid and tried everything I can think of. Passing objectIDs to the NSOperation to refresh instead of an entity name. Resetting the interpretedMOC at various points - after the data refresh and before the outline view reload. I'd subclassed NSOutlineView. I discarded my subclass and set the view back to being an instance of NSOutlineView, just in case there was any funny goings on here. Added a rearrangeObjects call to the NSTreeController before reloading the NSOutlineView data. Made sure I had set the staleness interval to 0 on all managed object contexts I was using. I've got a feeling this problem is somehow related to caching core data objects in memory. But I've totally exhausted all my ideas on how I get this to work. I'd be eternally grateful of any ideas anyone else has. Code Main Thread Approach // In App Delegate -(void)applicationDidBecomeActive:(NSNotification *)notification { // Delay to allow time for the daemon to save [self performSelector:@selector(refreshTrainingEntriesAndGroups) withObject:nil afterDelay:3]; } -(void)refreshTrainingEntriesAndGroups { NSSet *allTrainingGroups = [[[NSApp delegate] interpretedMOC] fetchAllObjectsForEntityName:kTrainingGroup]; for(JGTrainingGroup *thisTrainingGroup in allTrainingGroups) [interpretedMOC refreshObject:thisTrainingGroup mergeChanges:YES]; NSError *saveError = nil; [interpretedMOC save:&saveError]; [windowController performSelectorOnMainThread:@selector(refreshTrainingView) withObject:nil waitUntilDone:YES]; } // In window controller class -(void)refreshTrainingView { [trainingViewTreeController rearrangeObjects]; // Didn't really expect this to have any effect. And it didn't. 
[trainingView reloadData]; } NSOperation Multithreaded Approach // In App Delegate -(void)refreshTrainingEntriesAndGroups { JGRefreshEntityOperation *trainingGroupRefresh = [[JGRefreshEntityOperation alloc] initWithEntityName:kTrainingGroup]; NSOperationQueue *refreshQueue = [[NSOperationQueue alloc] init]; [refreshQueue setMaxConcurrentOperationCount:1]; [refreshQueue addOperation:trainingGroupRefresh]; while ([[refreshQueue operations] count] > 0) { [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.05]]; [windowController performSelectorOnMainThread:@selector(refreshTrainingView) withObject:nil waitUntilDone:YES]; } // JGRefreshEntityOperation.m @implementation JGRefreshEntityOperation @synthesize started; @synthesize executing; @synthesize paused; @synthesize finished; -(void)main { [self startOperation]; NSSet *allEntities = [imoc fetchAllObjectsForEntityName:entityName]; for(id thisEntity in allEntities) [imoc refreshObject:thisEntity mergeChanges:YES]; [self finishOperation]; } -(void)startOperation { [self willChangeValueForKey:@"isExecuting"]; [self willChangeValueForKey:@"isStarted"]; [self setStarted:YES]; [self setExecuting:YES]; [self didChangeValueForKey:@"isExecuting"]; [self didChangeValueForKey:@"isStarted"]; imoc = [[NSManagedObjectContext alloc] init]; [imoc setStalenessInterval:0]; [imoc setUndoManager:nil]; [imoc setPersistentStoreCoordinator:[[NSApp delegate] interpretedPSC]]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(mergeChanges:) name:NSManagedObjectContextDidSaveNotification object:imoc]; } -(void)finishOperation { saveError = nil; [imoc save:&saveError]; if (saveError) { NSLog(@"Error saving. %@", saveError); } imoc = nil; [self willChangeValueForKey:@"isExecuting"]; [self willChangeValueForKey:@"isFinished"]; [self setExecuting:NO]; [self setFinished:YES]; [self didChangeValueForKey:@"isExecuting"]; [self didChangeValueForKey:@"isFinished"]; } -(void)mergeChanges:(NSNotification *)notification { NSManagedObjectContext *mainContext = [[NSApp delegate] interpretedMOC]; [mainContext performSelectorOnMainThread:@selector(mergeChangesFromContextDidSaveNotification:) withObject:notification waitUntilDone:YES]; } -(id)initWithEntityName:(NSString *)entityName_ { [super init]; [self setStarted:false]; [self setExecuting:false]; [self setPaused:false]; [self setFinished:false]; [NSThread setThreadPriority:0.0]; entityName = entityName_; return self; } @end // JGRefreshEntityOperation.h @interface JGRefreshEntityOperation : NSOperation { NSString *entityName; NSManagedObjectContext *imoc; NSError *saveError; BOOL started; BOOL executing; BOOL paused; BOOL finished; } @property(readwrite, getter=isStarted) BOOL started; @property(readwrite, getter=isPaused) BOOL paused; @property(readwrite, getter=isExecuting) BOOL executing; @property(readwrite, getter=isFinished) BOOL finished; -(void)startOperation; -(void)finishOperation; -(id)initWithEntityName:(NSString *)entityName_; -(void)mergeChanges:(NSNotification *)notification; @end

    Read the article

  • Network Data Packet connectivity intent

    - by Rakesh
    I am writing an Android application which can enable and disable the Network Data packet connection. I am also using one broadcast receiver to check the Network Data packet connection. I have registered broadcast receiver and provided required permission in Manifest file. But when I run this application it changes the connection state and after that it crashes. But when I don't include this broadcast receiver it works fine. I am not able to see any kind of log which can provide some clue. Here is my code for broadcast receiver. <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.rakesh.simplewidget" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="10" /> <!-- Permissions --> <uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" /> <uses-permission android:name="android.permission.MODIFY_PHONE_STATE" /> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" > <activity android:name=".SimpleWidgetExampleActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <!-- <receiver android:name=".ExampleAppWidgetProvider" android:label="Widget ErrorBuster" > <intent-filter> <action android:name="android.appwidget.action.APPWIDGET_UPDATE" /> </intent-filter> <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget1_info" /> </receiver> --> <receiver android:name=".ConnectivityReceiver" > <intent-filter> <action android:name="android.net.conn.CONNECTIVITY_CHANGE" /> </intent-filter> </receiver> </application> </manifest> My Broadcast receiver class is as following. import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.net.ConnectivityManager; import android.net.NetworkInfo; import android.util.Log; public class ConnectivityReceiver extends BroadcastReceiver { @Override public void onReceive(Context context, Intent intent) { NetworkInfo info = (NetworkInfo)intent.getParcelableExtra(ConnectivityManager.EXTRA_NETWORK_INFO); if(info.getType() == ConnectivityManager.TYPE_MOBILE){ if(info.isConnectedOrConnecting()){ Log.e("RK","Mobile data is connected"); }else{ Log.e("RK","Mobile data is disconnected"); } } } } my Main activity file. package com.rakesh.simplewidget; import java.lang.reflect.Field; import java.lang.reflect.Method; import android.app.Activity; import android.content.Context; import android.content.Intent; import android.graphics.Color; import android.net.ConnectivityManager; import android.os.Bundle; import android.telephony.TelephonyManager; import android.util.Log; import android.view.View; import android.widget.Button; import android.widget.Toast; public class SimpleWidgetExampleActivity extends Activity { private Button btNetworkSetting; /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); btNetworkSetting = (Button)findViewById(R.id.btNetworkSetting); if(checkConnectivityState(getApplicationContext())){ btNetworkSetting.setBackgroundColor(Color.GREEN); }else{ btNetworkSetting.setBackgroundColor(Color.GRAY); } } public void openNetworkSetting(View view){ Method dataConnSwitchmethod; Class telephonyManagerClass; Object ITelephonyStub; Class ITelephonyClass; Context context = view.getContext(); boolean enabled = !checkConnectivityState(context); final ConnectivityManager conman = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE); try{ final Class conmanClass = Class.forName(conman.getClass().getName()); final Field iConnectivityManagerField = conmanClass.getDeclaredField("mService"); iConnectivityManagerField.setAccessible(true); final Object iConnectivityManager = iConnectivityManagerField.get(conman); final Class iConnectivityManagerClass = Class.forName(iConnectivityManager.getClass().getName()); final Method setMobileDataEnabledMethod = iConnectivityManagerClass.getDeclaredMethod("setMobileDataEnabled", Boolean.TYPE); setMobileDataEnabledMethod.setAccessible(true); setMobileDataEnabledMethod.invoke(iConnectivityManager, enabled); if(enabled){ Toast.makeText(view.getContext(), "Enabled Network Data", Toast.LENGTH_LONG).show(); view.setBackgroundColor(Color.GREEN); } else{ Toast.makeText(view.getContext(), "Disabled Network Data", Toast.LENGTH_LONG).show(); view.setBackgroundColor(Color.LTGRAY); } }catch(Exception e){ Log.e("Error", "some error"); Toast.makeText(view.getContext(), "It didn't work", Toast.LENGTH_LONG).show(); } } private boolean checkConnectivityState(Context context){ final TelephonyManager telephonyManager = (TelephonyManager) context .getSystemService(Context.TELEPHONY_SERVICE); ConnectivityManager af ; return telephonyManager.getDataState() == TelephonyManager.DATA_CONNECTED; } } Log file: java.lang.RuntimeException: Unable to instantiate receiver com.rakesh.simplewidget.ConnectivityReceiver: java.lang.ClassNotFoundException: com.rakesh.simplewidget.ConnectivityReceiver in loader dalvik.system.PathClassLoader[/data/app/com.rakesh.simplewidget-2.apk] E/AndroidRuntime(26094): at android.app.ActivityThread.handleReceiver(ActivityThread.java:1777) E/AndroidRuntime(26094): at android.app.ActivityThread.access$2400(ActivityThread.java:117) E/AndroidRuntime(26094): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:985) E/AndroidRuntime(26094): at android.os.Handler.dispatchMessage(Handler.java:99) E/AndroidRuntime(26094): at android.os.Looper.loop(Looper.java:130) E/AndroidRuntime(26094): at android.app.ActivityThread.main(ActivityThread.java:3691) E/AndroidRuntime(26094): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime(26094): at java.lang.reflect.Method.invoke(Method.java:507) E/AndroidRuntime(26094): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:907) E/AndroidRuntime(26094): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) E/AndroidRuntime(26094): at dalvik.system.NativeStart.main(Native Method) It seems Android is not able to recognize file Broadcast Receiver class. Any idea why I am getting this error? PS: Some information about Android environment and platform. - Android API 10. 
- Running on Samsung Galaxy II which has android 2.3.6 Edit: my broadcast receiver file ConnectivityReceiver.java was present in default package and it was not being recognized by Android. Android was looking for this file in current package i.e com.rakesh.simplewidget; I just moved connectivityReciever.java file to com.rakesh.simplewidget package and problem was solved.
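
    For anyone hitting the same ClassNotFoundException: android:name=".ConnectivityReceiver" is resolved relative to the manifest's package attribute (com.rakesh.simplewidget here), so the class must live in that package - or the manifest entry must spell out the fully qualified name:

        <receiver android:name="com.rakesh.simplewidget.ConnectivityReceiver" >
            <intent-filter>
                <action android:name="android.net.conn.CONNECTIVITY_CHANGE" />
            </intent-filter>
        </receiver>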

    Read the article

  • Not able to get data from Json completely

    - by Abhinav Raja
    I am getting JSON data from http://abinet.org/?json=1 and displaying the titles in a ListView. The code runs fine, but it skips a few titles in my ListView and one title is repeated. You can inspect the JSON data from the URL above by pasting it into an online JSON editor such as http://www.jsoneditoronline.org/. I want the titles in the "posts" array to be displayed in the ListView, but the list comes out wrong: compared with the JSON data from the link above, about 3 titles are missing (they should appear between the first and second title) and the 5th title is repeated. I don't know why this is happening. What minor adjustments do I need to make? Please help me. This is my code:

        public class MainActivity extends Activity {

            // URL to get the posts JSON
            private static String url = "http://abinet.org/?json=1";

            // JSON node names
            private static final String TAG_POSTS = "posts";
            static final String TAG_TITLE = "title";

            private ProgressDialog pDialog;
            JSONArray contacts = null;
            TextView img_url;
            ArrayList<HashMap<String, Object>> contactList;
            ListView lv;
            LazyAdapter adapter;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                lv = (ListView) findViewById(R.id.newslist);
                contactList = new ArrayList<HashMap<String, Object>>();
                new GetContacts().execute();
            }

            private class GetContacts extends AsyncTask<Void, Void, Void> {

                @Override
                protected void onPreExecute() {
                    super.onPreExecute();
                    // Show a progress dialog while the feed loads
                    pDialog = new ProgressDialog(MainActivity.this);
                    pDialog.setMessage("Please wait...");
                    pDialog.setCancelable(false);
                    pDialog.show();
                }

                @Override
                protected Void doInBackground(Void... arg0) {
                    // Request the URL and parse the response
                    JSONParser jParser = new JSONParser();
                    JSONObject jsonObj = jParser.getJSONFromUrl(url);
                    try {
                        // Get the "posts" array
                        contacts = jsonObj.getJSONArray(TAG_POSTS);
                        // Loop through all posts
                        for (int i = 0; i < contacts.length(); i++) {
                            JSONObject posts = contacts.getJSONObject(i);
                            String title = posts.getString(TAG_TITLE).replace("&#8217;", "'");
                            JSONArray attachment = posts.getJSONArray("attachments");
                            for (int j = 0; j < attachment.length(); j++) {
                                JSONObject obj = attachment.getJSONObject(j);
                                JSONObject image = obj.getJSONObject("images");
                                JSONObject image_small = image.getJSONObject("thumbnail");
                                String imgurl = image_small.getString("url");
                                HashMap<String, Object> contact = new HashMap<String, Object>();
                                contact.put("image_url", imgurl);
                                contact.put(TAG_TITLE, title);
                                contactList.add(contact);
                            }
                        }
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                    return null;
                }

                @Override
                protected void onPostExecute(Void result) {
                    super.onPostExecute(result);
                    // Dismiss the progress dialog
                    if (pDialog.isShowing())
                        pDialog.dismiss();
                    adapter = new LazyAdapter(MainActivity.this, contactList);
                    lv.setAdapter(adapter);
                }
            }
        }

    This is my JSONParser class (although it's probably not the culprit):

        public class JSONParser {

            static InputStream is = null;
            static JSONObject jObj = null;
            static String json = "";

            public JSONParser() {
            }

            public JSONObject getJSONFromUrl(String url) {
                // Make the HTTP request
                try {
                    DefaultHttpClient httpClient = new DefaultHttpClient();
                    HttpPost httpPost = new HttpPost(url);
                    HttpResponse httpResponse = httpClient.execute(httpPost);
                    HttpEntity httpEntity = httpResponse.getEntity();
                    is = httpEntity.getContent();
                } catch (UnsupportedEncodingException e) {
                    e.printStackTrace();
                } catch (ClientProtocolException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }

                // Read the response into a string
                try {
                    BufferedReader reader = new BufferedReader(
                            new InputStreamReader(is, "iso-8859-1"), 8);
                    StringBuilder sb = new StringBuilder();
                    String line = null;
                    while ((line = reader.readLine()) != null) {
                        sb.append(line + "\n");
                    }
                    is.close();
                    json = sb.toString();
                } catch (Exception e) {
                    Log.e("Buffer Error", "Error converting result " + e.toString());
                }

                // Try to parse the string into a JSON object
                try {
                    jObj = new JSONObject(json);
                } catch (JSONException e) {
                    Log.e("JSON Parser", "Error parsing data " + e.toString());
                }

                return jObj;
            }
        }

    And this is my adapter class:

        public class LazyAdapter extends BaseAdapter {

            private Activity activity;
            private ArrayList<HashMap<String, Object>> data;
            private static LayoutInflater inflater = null;

            public LazyAdapter(Activity a, ArrayList<HashMap<String, Object>> d) {
                activity = a;
                data = d;
                inflater = (LayoutInflater) activity.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
            }

            public int getCount() {
                return data.size();
            }

            public Object getItem(int position) {
                return position;
            }

            public long getItemId(int position) {
                return position;
            }

            public View getView(int position, View convertView, ViewGroup parent) {
                View vi = convertView;
                if (convertView == null)
                    vi = inflater.inflate(R.layout.third_row, null);

                TextView title = (TextView) vi.findViewById(R.id.headline3);
                SmartImageView iv = (SmartImageView) vi.findViewById(R.id.imageicon);

                HashMap<String, Object> song = data.get(position);

                // Set the row's title and thumbnail
                title.setText((CharSequence) song.get(MainActivity.TAG_TITLE));
                iv.setImageUrl((String) song.get("image_url"));
                return vi;
            }
        }

    Please help me. I have been stuck on this for more than a week. I think something just needs to change in my MainActivity class.
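    For what it's worth, the symptoms match building one list row per attachment instead of per post: a post with no attachments never reaches contactList (a skipped title), while a post with several attachments is added once per attachment (a repeated title). A minimal sketch of a restructured loop — it reuses the question's own names (contacts, contactList, TAG_TITLE), and the empty-URL fallback for posts without attachments is my assumption — might look like this:

        // Sketch: add exactly one row per post, taking at most the first thumbnail.
        for (int i = 0; i < contacts.length(); i++) {
            JSONObject post = contacts.getJSONObject(i);
            String title = post.getString(TAG_TITLE).replace("&#8217;", "'");

            String imgurl = "";  // assumed placeholder for posts with no attachment
            JSONArray attachments = post.optJSONArray("attachments");
            if (attachments != null && attachments.length() > 0) {
                // Use only the first attachment's thumbnail so the post is added once
                JSONObject images = attachments.getJSONObject(0).getJSONObject("images");
                imgurl = images.getJSONObject("thumbnail").getString("url");
            }

            HashMap<String, Object> row = new HashMap<String, Object>();
            row.put("image_url", imgurl);
            row.put(TAG_TITLE, title);
            contactList.add(row);  // one add per post, not per attachment
        }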

    Read the article

  • PHP, MySQL, jQuery, AJAX: JSON data returns correct response but frontend returns error

    - by Devner
    Hi all, I have a user registration form and I am doing server-side validation on the fly via AJAX. The quick summary of my problem: when I validate 2 fields, I get an error for the second field's validation. If I comment out the first field's validation, the 2nd field shows no error. This weird behavior is described in more detail below. The HTML, JS and PHP code follow.

    The HTML form:

        <form id="SignupForm" action="">
            <fieldset>
                <legend>Free Signup</legend>
                <label for="username">Username</label>
                <input name="username" type="text" id="username" /><span id="status_username"></span><br />
                <label for="email">Email</label>
                <input name="email" type="text" id="email" /><span id="status_email"></span><br />
                <label for="confirm_email">Confirm Email</label>
                <input name="confirm_email" type="text" id="confirm_email" /><span id="status_confirm_email"></span><br />
            </fieldset>
            <p>
                <input id="sbt" type="button" value="Submit form" />
            </p>
        </form>

    The JS:

        <script type="text/javascript">
        $(document).ready(function() {

            $("#email").blur(function() {
                var email = $("#email").val();
                var msgbox2 = $("#status_email");
                if (email.length > 3) {
                    $.ajax({
                        type: 'POST',
                        url: 'check_ajax2.php',
                        data: "email=" + email,
                        dataType: 'json',
                        cache: false,
                        success: function(data) {
                            if (data.success == 'y') {
                                alert('Available');
                            } else {
                                alert('Not Available');
                            }
                        }
                    });
                }
                return false;
            });

            $("#confirm_email").blur(function() {
                var confirm_email = $("#confirm_email").val();
                var email = $("#email").val();
                var msgbox3 = $("#status_confirm_email");
                if (confirm_email.length > 3) {
                    $.ajax({
                        type: 'POST',
                        url: 'check_ajax2.php',
                        data: 'confirm_email=' + confirm_email + '&email=' + email,
                        dataType: 'json',
                        cache: false,
                        success: function(data) {
                            if (data.success == 'y') {
                                alert('Available');
                            } else {
                                alert('Not Available');
                            }
                        },
                        error: function(data) {
                            alert('Some error');
                        }
                    });
                }
                return false;
            });

        });
        </script>

    The PHP:

        <?php
        // check_ajax2.php
        if (isset($_POST['email'])) {
            $email = $_POST['email'];
            $res = mysql_query("SELECT uid FROM members WHERE email = '$email' ");
            $i_exists = mysql_num_rows($res);
            if (0 == $i_exists) {
                $success = 'y';
                $msg_email = 'Email available';
            } else {
                $success = 'n';
                $msg_email = 'Email is already in use.';
            }
            print json_encode(array('success' => $success, 'msg_email' => $msg_email));
        }

        if (isset($_POST['confirm_email'])) {
            $confirm_email = $_POST['confirm_email'];
            $email = (isset($_POST['email']) && trim($_POST['email']) != '' ? $_POST['email'] : '');
            $res = mysql_query("SELECT uid FROM members WHERE email = '$confirm_email' ");
            $i_exists = mysql_num_rows($res);
            if (0 == $i_exists) {
                if (isset($email) && isset($confirm_email) && $email == $confirm_email) {
                    $success = 'y';
                    $msg_confirm_email = 'Email available and match';
                } else {
                    $success = 'n';
                    $msg_confirm_email = 'Email and Confirm Email do NOT match.';
                }
            } else {
                $success = 'n';
                $msg_confirm_email = 'Email already exists.';
            }
            print json_encode(array('success' => $success, 'msg_confirm_email' => $msg_confirm_email));
        }
        ?>

    THE PROBLEM: As long as I validate both $_POST['email'] and $_POST['confirm_email'] in check_ajax2.php, the validation for the confirm_email field always returns an error. With my limited knowledge of Firebug, I did find out that these are the responses when I enter email and confirm_email in the fields:

        RESPONSE 1: {"success":"y","msg_email":"Email available"}
        RESPONSE 2: {"success":"y","msg_email":"Email available"}{"success":"n","msg_confirm_email":"Email and Confirm Email do NOT match."}

    Although RESPONSE 2 shows that we are receiving the correct message via msg_confirm_email, on the front end the 'Some error' alert pops up (I have enabled the alert for debugging). I have spent 48 hours changing every part of the code wherever possible, with only little success. What is weird is that if I comment out the validation of the $_POST['email'] field completely, the validation of the $_POST['confirm_email'] field displays correctly without any errors. If I enable it again, it validates the email field correctly, but when it reaches the point of validating the confirm_email field, it shows me the error again. I have also tried renaming the success variable in check_ajax2.php to different names for $_POST['email'] and $_POST['confirm_email'], but with no success. I will be adding more fields to the form and validating them within check_ajax2.php, so I am not planning on using a separate AJAX page for each field (and I don't think it would be smart to do it that way). I am not a jQuery or AJAX guru, so all help in resolving this issue is highly appreciated. Thank you in advance.
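    A hedged observation: RESPONSE 2 is two JSON objects concatenated, which is not valid JSON, so with dataType: 'json' jQuery cannot parse it and fires the error callback — which would also explain why commenting out the first block makes the error disappear. A minimal sketch of one way to emit a single object instead (the $response array and $email_in_use flag are my names, and the database lookups are elided):

        <?php
        // Sketch: collect every validation result, then json_encode exactly once.
        $response = array('success' => 'y');

        if (isset($_POST['email'])) {
            $email_in_use = false; // assumed result of the question's mysql_* lookup
            if ($email_in_use) {
                $response['success'] = 'n';
                $response['msg_email'] = 'Email is already in use.';
            } else {
                $response['msg_email'] = 'Email available';
            }
        }

        if (isset($_POST['confirm_email'])) {
            $email = isset($_POST['email']) ? $_POST['email'] : '';
            if ($_POST['confirm_email'] !== $email) {
                $response['success'] = 'n';
                $response['msg_confirm_email'] = 'Email and Confirm Email do NOT match.';
            } else {
                $response['msg_confirm_email'] = 'Email available and match';
            }
        }

        // One output, so the client always receives valid JSON.
        print json_encode($response);
        ?>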

    Read the article

  • Dropped hard drive won't mount

    - by Dave DeLong
    I have a 2 TB HFS+-formatted external hard drive that got dropped a couple of days ago while transferring files onto a MacBook Pro. Now the drive's partitions won't mount. Disk Utility can see the drive but doesn't recognize that it has any partitions. I've tried using Data Rescue 2 to recover files from it, but it couldn't find anything, and our local computer repair shop said they couldn't find anything on it either. I know I could ship the drive off to someone like DriveSavers, but I was hoping for a cheaper option (they start at about $500 just for the attempt). Is there something else I could try on my own? Would TestDisk be able to help with something like this?
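    If the drive still responds at the block level, one hedged first step is to image whatever it will read onto a second disk and run recovery tools against that image rather than the failing hardware. The device path and output locations below are placeholders — check the real device with diskutil list (macOS) or lsblk (Linux) before running anything:

        # First pass with GNU ddrescue: copy the readable areas, skip the bad ones (-n)
        sudo ddrescue -f -n /dev/disk2 /Volumes/Spare/failing.img /Volumes/Spare/failing.map

        # Second pass: retry the remaining bad areas a few times (-r3)
        sudo ddrescue -f -r3 /dev/disk2 /Volumes/Spare/failing.img /Volumes/Spare/failing.map

        # Then point TestDisk at the image to search for the lost HFS+ partition
        testdisk /Volumes/Spare/failing.img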

    Read the article
