Search Results

Search found 48589 results on 1944 pages for 'system requirements'.

Page 652 of 1944

  • Cannot access the filesystems using a LiveCD (LVM2, EXT2)

    - by ftkg
    I have to restore the /etc/passwd file I accidentally renamed in my Ubuntu Server, so I booted the machine using a LiveCD. Problem is, the system filesystem does not appear in Nautilus, under 'Devices'. Am I missing anything?

        ubuntu@ubuntu:~$ sudo fdisk -l
        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000956dc
        Device Boot Start End Blocks Id System
        /dev/sda1 * 2048 499711 248832 83 Linux
        /dev/sda2 501758 625141759 312320001 5 Extended
        /dev/sda5 501760 625141759 312320000 8e Linux LVM

        ubuntu@ubuntu:~$ mount
        /cow on / type overlayfs (rw)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        /dev/sr0 on /cdrom type iso9660 (ro,noatime)
        /dev/loop0 on /rofs type squashfs (ro,noatime)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        gvfs-fuse-daemon on /home/ubuntu/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=ubuntu)

        ubuntu@ubuntu:~$ sudo blkid
        /dev/loop0: TYPE="squashfs"
        /dev/sda1: UUID="aad69790-198d-45bc-9ccd-e4cba7456914" TYPE="ext2"
        /dev/sda5: UUID="wbIDX7-RILL-VtFT-gX15-N1GJ-Yyfg-V8Oe5m" TYPE="LVM2_member"
        /dev/sr0: LABEL="Ubuntu 12.04 LTS i386" TYPE="iso9660"

        ubuntu@ubuntu:~$ cat /etc/fstab
        overlayfs / overlayfs rw 0 0
        tmpfs /tmp tmpfs nosuid,nodev 0 0
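    The live session can see the LVM physical volume (/dev/sda5), but it typically has no LVM tooling installed, so the logical volumes never show up as devices in Nautilus. A minimal sketch of one way to reach them from the live session (the volume group and logical volume names below are placeholders; vgscan and lvs print the real ones):

        # Nautilus only lists plain partitions; LVM logical volumes must be activated first
        sudo apt-get update && sudo apt-get install lvm2
        sudo vgscan                            # find volume groups on /dev/sda5
        sudo vgchange -ay                      # activate every logical volume found
        sudo lvs                               # list them, e.g. /dev/<vg>/root
        sudo mkdir -p /mnt/root
        sudo mount /dev/<vg>/root /mnt/root    # substitute the names lvs printed
        # the renamed file should then be reachable under /mnt/root/etc/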

    Read the article

  • Webcast Series Part II: Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model

    - by Melissa Centurio Lopes
    Register today for the second part of this webcast series on Thursday, November 29, 2012, 10:00 a.m. PT / 1:00 p.m. ET. Project Portfolio Management solutions have an immediate and lasting impact on both providers' and contractors' bottom lines by helping to manage the costs and risks of healthcare infrastructure projects from planning through handover and operation. During this webcast, Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model, Garrett Harley and Thomas Koulouris will continue their discussion of healthcare infrastructure strategy changes and will cover the following topics:
    - The shift in healthcare infrastructure strategy and how it will impact providers and contractors
    - The Integrated Infrastructure & Lifecycle Solutions for Capital Assets and how these solutions help your business
    - Communication and integration between providers and contractors and why it is so important to your bottom line
    - The new integrated delivery system in healthcare infrastructure and how Project Portfolio Management is critical to the success of that system

    Read the article

  • How to connect two Ubuntu computers with an ethernet cable

    - by Lukasz Zaroda
    I'm trying to connect two computers, a desktop and a laptop, with an ethernet cable, so that I can transfer a lot of data from one to the other. I followed "How to network two Ubuntu computers using ethernet (without a router)?", but ping always gives me "Destination host unreachable". I searched for a while but couldn't figure out why it doesn't work; maybe it's something about my devices, or maybe someone will have another idea.

    The ethernet cable came with my router. The text printed on it reads: Aurit Data Cable Cat.5 UTP 26AWG 4PAIR AWM PUC 75°C EIA/TIA 568B. It currently connects my desktop to the router, which is how I can send this question.

    My desktop:
        System: Ubuntu 12.04
        Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03)
        "ethtool -i eth0" output:
        driver: r8169
        version: 2.3LK-NAPI
        firmware-version: rtl_nic/rtl8168d-1.fw
        bus-info: 0000:01:00.0
        supports-statistics: yes
        supports-test: no
        supports-eeprom-access: no
        supports-register-dump: yes

    My laptop:
        System: Ubuntu 14.04
        Ethernet controller: Qualcomm Atheros AR8162 Fast Ethernet (rev 08)
        "ethtool -i eth0" output:
        driver: alx
        version:
        firmware-version:
        bus-info: 0000:01:00.0
        supports-statistics: no
        supports-test: no
        supports-eeprom-access: no
        supports-register-dump: no
        supports-priv-flags: no

    My iptables rules accept everything. Any ideas why I cannot reach the other computer?
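    A minimal sketch of assigning static addresses to each end of the cable (the 10.0.0.x addresses and the eth0 interface name are assumptions; run one block per machine, with both addresses in the same subnet):

        # on the desktop
        sudo ip link set eth0 up
        sudo ip addr add 10.0.0.1/24 dev eth0

        # on the laptop
        sudo ip link set eth0 up
        sudo ip addr add 10.0.0.2/24 dev eth0

        # then test from either side
        ping -c 3 10.0.0.2         # from the desktop
        ip neigh show dev eth0     # a REACHABLE entry means the link itself is fine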

    Read the article

  • Should I modify an entity with many parameters or with the entity itself?

    - by Saeed Neamati
    We have a SOA-based system. The service methods look like:
        UpdateEntity(Entity entity)
    For small entities this is all fine. However, as entities get bigger and bigger, updating one property from the UI means following this pattern:
    1. Get the parameters from the UI (the user)
    2. Create an instance of the entity using those parameters
    3. Get the current entity from the service
    4. Write code to fill in the unchanged properties
    5. Give the resulting entity back to the service
    Another option, which I've seen on previous projects, is to create a semantic update method for each update scenario. In other words, instead of one global all-encompassing update method, we had many ad-hoc parametric methods. For example, for the User entity, instead of an UpdateUser(User user) method we had:
        ChangeUserPassword(int userId, string newPassword)
        AddEmailToUserAccount(int userId, string email)
        ChangeProfilePicture(int userId, Image image)
        ...
    Now, I don't know which approach is truly better, and each one brings its own problems. I'm going to design the infrastructure for a new system, and I don't have enough reasons to pick either approach. I couldn't find good resources on the Internet, because I lacked the keywords to search for. Which approach is better? What pitfalls does each have? What benefits can we get from each one?

    Read the article

  • What partition to use to keep data files in Ubuntu?

    - by Martin Lee
    I have been using Ubuntu for a few years, and my partition setup has usually been the following:
    - an ext3 or ext4 partition for the system itself (20 GB);
    - a 10 GB swap partition;
    - a big FAT32 partition to store movies, photos, work stuff, etc. (it depends on the capacity of the disk, but it is usually whatever is left after ext3 + swap; currently it is more than 200 GB).
    Does this setup sound right? I am considering switching to one big ext3 partition now, because the problems with FAT32 in Ubuntu have not gone anywhere:
    - Right now I can access my 'big' partition with a 'Data' label only through /media/_themes?END. Pretty strange name for a partition, isn't it?
    - Some Linux software fails to read/write on this partition. For example, if I want to play around with rebar and build/make/compile things on this FAT32 partition, it always complains about permissions and won't work (the same goes for many other kinds of software).
    - It is not stable; I cannot refer to some files on this FAT32 partition, because after the next reboot it will not be called '_themes?END' but something else.
    On the other hand, I usually begin to run out of space on the ext3 partition after a few months of usage. So, the question is: what is the best partition setup for an Ubuntu system? Should a FAT32 partition be used at all?
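    A minimal sketch of what a dedicated ext4 data partition can look like, in case it replaces the FAT32 one (the UUID and the /data mount point are placeholders; sudo blkid prints the real UUID):

        # one-time setup: a mount point owned by your user, so ordinary software can write there
        sudo mkdir -p /data
        sudo chown $USER:$USER /data

        # /etc/fstab line (hypothetical UUID) -- a fixed mount point, so the path never changes between reboots
        # UUID=aaaabbbb-cccc-dddd-eeee-ffff00001111  /data  ext4  defaults,noatime  0  2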

    Read the article

  • Asus N46VZ produces more heat on Ubuntu than on Windows

    - by Blaze Tama
    First, I have only just started using Ubuntu as my main operating system, so please bear with me. My Asus N46VZ temperatures are pretty high when I'm on Ubuntu (13.10): around 60°C even when I just open a browser. This does not happen on Windows, where the temperature stays around 50°C. I did some research, and some people said the problem might be my graphics card. I have an Intel HD 4000 and a GT 650M. I checked my system settings and it says I'm using "Intel® Ivybridge Mobile", so I looked at the "Additional Drivers" tab in Software & Updates but found nothing there. The point is, I still don't know what makes my laptop "overheat". The graphics card is just my guess (and I still don't understand the correlation between the graphics card and the heat; everything except the temperature seems fine). Any help is appreciated. Thanks :D
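    A minimal sketch of how one might confirm the readings and try the usual laptop power-saving tooling (the package names and the TLP PPA are assumptions for a 13.10-era install; with an Optimus Intel+NVIDIA pair, the discrete GPU idling at full power is a common cause of exactly this kind of gap):

        # read the sensors the kernel exposes
        sudo apt-get install lm-sensors
        sudo sensors-detect --auto
        sensors                          # CPU/GPU temperatures, fan speeds where supported

        # TLP applies sensible laptop power-saving defaults
        sudo add-apt-repository ppa:linrunner/tlp
        sudo apt-get update && sudo apt-get install tlp
        sudo tlp start

        # on Optimus hardware, bumblebee lets the GT 650M power down when unused
        sudo apt-get install bumblebee bumblebee-nvidia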

    Read the article

  • git-flow "finish feature" reverted commits from develop

    - by marco-fiset
    I am using git as a version control system, with git-flow as the branching model. I started a feature branch some weeks ago in order to keep the system in a clean state while developing that feature. The main development continued on the develop branch, and changes from develop were merged periodically into the feature branch to keep it as up to date as possible. Eventually the feature was finished, and I used git-flow's "finish feature" operation to merge it back into develop. The merge completed successfully, but then I found out that some of the commits I had made in develop were reverted by the merge commit! Nowhere in develop or in the feature branch were these changes reverted; I can't see any commit that overwrote them. I just can't find anything. The only theory I have at the moment is that git is failing on me, but that would be extremely unlikely. Maybe I made some wrong manipulation that caused this? I can trace back in the history to when the commit was made, and I can see that its changes were undone by the merge commit, yet nowhere in either branch is there a commit that reverts them. How is this even possible?
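    A minimal sketch of the kind of archaeology that usually shows where the change went (branch and path names are placeholders):

        # every commit that ever touched the file, including those reachable only through merges
        git log --all --follow -p -- path/to/file

        # which commits between the two branch tips actually changed the file
        git log develop...feature/my-feature -- path/to/file

        # inspect the merge commit itself; a combined diff that touches the file
        # means the merge resolution (not a revert) is what undid the change
        git show -m --stat <merge-commit-sha>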

    Read the article

  • Ubuntu 12.04 Server ping gateway responds with destination host unreachable

    - by blckblttkd
    I consider myself fairly comfortable with Ubuntu and Linux, but this one has me stumped. I built a Xen server using Ubuntu 12.04 as the base operating system; it has multiple domUs running on it. On my home network, with a statically defined address, all the network connectivity was fine. The server was moved to its permanent home this morning, so the network configuration on the host had to change. Again a static network, but now I can't ping the upstream gateway from the host. Since the VMs use this NIC over a bridge, they are broken too. Ping responds with "Destination host unreachable". I simplified the networking down to a plain static configuration (no bridge or anything), as seen below, just to get it working.

    Here's the contents of my /etc/network/interfaces file:
        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet static
        address 216.7.188.228
        gateway 216.7.188.225
        netmask 255.255.255.240
        broadcast 216.7.188.255
        network 216.7.188.0
        dns-nameservers 8.8.8.8 8.8.4.4

    Here's the output of route -n:
        0.0.0.0 216.7.188.225 0.0.0.0 UG 100 0 0 eth0
        216.7.188.224 0.0.0.0 255.255.255.240 U 0 0 0 eth0

    And the results of pinging the gateway:
        PING 216.7.188.225 (216.7.188.225) 56(84) bytes of data.
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable

    Again, this worked flawlessly on the other network (obviously with different parameters in the interfaces file). I did try using eth1, since there are two NICs on the server, in case the MAC addresses got flipped on boot, but no success there. Yes, the cable is in the right port now :) Any thoughts? I appreciate the help!
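    A minimal sketch of the checks that usually narrow down "Destination Host Unreachable" on a directly attached gateway (the eth0 interface name is assumed):

        ip link show eth0                  # is the link actually up? look for LOWER_UP
        sudo ethtool eth0 | grep -i link   # "Link detected: yes" rules out cable/port problems
        ip addr show eth0                  # confirm the address and /28 prefix took effect
        ip neigh show 216.7.188.225        # FAILED/INCOMPLETE means no ARP reply from the gateway
        sudo arping -I eth0 216.7.188.225  # ask for the gateway's MAC directly
        # note: for 216.7.188.224/28 the broadcast would normally be 216.7.188.239 and the
        # network 216.7.188.224 -- worth double-checking, although a gateway that never answers
        # ARP usually points at the cable, the switch port, or the upstream provider's config.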

    Read the article

  • How do I get an application to appear as a choice in update-alternatives?

    - by Jay
    I separately installed the Firefox Beta and Alpha channels, and have desktop configuration files pointing to them in ~/.local/share/applications. However, stable Firefox is being used as my default browser by the system. (Firefox Beta used to be used until I messed with the "Default Applications" in System Settings, where it is not listed.) I tried running sudo update-alternatives --config x-www-browser to manually change it, but it only recognizes Chromium and Firefox (stable) and shows them as choices. What can I do to get custom desktop configuration files in ~/.local/share/applications to be seen as default alternatives? I think I may have to fiddle with the desktop config files, or with mimeinfo.cache or mimeapps.list? Running Oneiric. Here is the content of the firefox-beta.desktop file I created:

        [Desktop Entry]
        Name=Firefox Beta
        Exec=firefox-beta -P Beta -no-remote
        Icon=firefox
        Terminal=false
        X-MultipleArgs=false
        Type=Application
        StartupNotify=true
        StartupWMClass=Firefox
        Categories=GNOME;GTK;Network;WebBrowser;
        Comment[en_US]=Firefox Beta Channel
        MimeType=text/html;text/xml;application/xhtml+xml;application/xml;application/vnd.mozilla.xul+xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;x-scheme-handler/ftp;x-scheme-handler/chrome;video/webm;
        Name[en_US]=Firefox Beta

        [NewWindow Shortcut Group]
        Name=Open a New Window
        Exec=firefox-beta -new-window about:blank
        TargetEnvironment=Unity
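    A minimal sketch of registering the beta build yourself (the binary path /opt/firefox-beta/firefox and the priority 60 are assumptions; point the path at wherever the beta channel is unpacked):

        # register the binary so update-alternatives knows about it
        sudo update-alternatives --install /usr/bin/x-www-browser x-www-browser /opt/firefox-beta/firefox 60
        sudo update-alternatives --config x-www-browser

        # the desktop "default browser" is a separate setting driven by the .desktop file
        xdg-settings set default-web-browser firefox-beta.desktop
        xdg-mime default firefox-beta.desktop x-scheme-handler/http x-scheme-handler/https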

    Read the article

  • Questions about Code Reviews

    - by bamboocha
    My team plans to start doing code reviews and asked me to put together a concept for what we will review and how. We are a small group of 6 team members. We use an SVN repository and write programs in different languages (mostly VB.NET, Java, and C#), but the reviews should also be possible for other languages not yet defined. Basically I am asking how you do it; to be more precise, here is the list of questions I have:
    1. Peer meetings vs. a ticket system? Would you tend to do meetings with all members, or rather something like a ticket system, where the developer can add a new code change and some or all of the others need to check and approve it?
    2. What tool? My own research suggested that Rietveld seems to be the program to use for non-git repositories. Do you agree/disagree, and why?
    3. Is there a good workflow to follow?
    4. Are there good ways to minimize the effort of those meetings even further?
    5. What are good questions every code reviewer should ask? I already made a list; what would you append or remove?
       - Are there any magic numbers in the code?
       - Do all variable and method names make sense and are they easily understandable?
       - Are all queries using prepared statements?
       - Are all objects disposed/closed when they are no longer needed?
    6. What are your general experiences with code review? What's important? What should we consider, prevent, or watch out for?

    Read the article

  • Fixed timestep with interpolation in AS3

    - by Jim Sreven
    I'm trying to implement, in Flash, Glenn Fiedler's popular fixed timestep system as documented here: http://gafferongames.com/game-physics/fix-your-timestep/. I'm fairly sure that I've got it set up correctly, along with state interpolation. The result is that if my character is supposed to move at 6 pixels per frame at 35 frames per second (210 pixels a second), it does exactly that, even if the framerate climbs or falls. The problem is that it looks awful: the movement is very stuttery and just doesn't look good. I find that the amount of time between ENTER_FRAME events, which I'm adding onto my accumulator, averages out to 28.5 ms (1000/35) just as it should, but individual frame times vary wildly; sometimes an ENTER_FRAME event comes 16 ms after the last, sometimes 42 ms. This means that at each graphical redraw the character graphic moves by a different amount, because a different amount of time has passed since the last draw. In theory it should look smooth, but it doesn't at all. In contrast, if I just use the ultra-simple system of moving the character 6 px every frame, it looks completely smooth, even with these large variances in frame times. How is this possible? I'm using getTimer() to measure these time differences; is it even reliable?

    Read the article

  • A fresh install and clean up?

    - by wdypdx22
    I started with Ubuntu around 3 years ago and have been a dedicated user ever since. During that time I tried out lots of apps, themes, etc., and I've upgraded through every version as it has come along, so now I'm running Lucid. Basically, my system has gotten sort of "messy" and I'm planning a vigorous clean-up and a fresh install. My /home is on a separate partition from everything else, so I can preserve that. I want to find and remove unused, unneeded apps (which I pretty much understand how to do). I also want to get back to the default desktop theme and build back up from there. And other messes surely exist. So, my question is: what is a good, logical plan to clean up and freshly reinstall my system? (One note: searches turn up many links on this topic, but many of them are out of date, so it has gotten rather confusing to say the least.)
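    A minimal sketch of taking stock before wiping, assuming a Lucid-era GNOME 2 desktop (the output file names are arbitrary):

        # record what is installed now so you can compare after the fresh install
        dpkg --get-selections > ~/installed-packages.txt

        # packages you explicitly asked for, as opposed to pulled-in dependencies
        aptitude search '~i!~M' > ~/manually-installed.txt

        # drop the current user's theme tweaks back to the GNOME 2 defaults
        gconftool-2 --recursive-unset /desktop/gnome/interface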

    Read the article

  • We're Subversion Geeks and we want to know the benefits of Mercurial

    - by Matt
    Having read "I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?", I have a related follow-up question. I read that question, and the recommended links and videos, and I see the benefits, but I don't see the overall mind shift people are talking about. Our team is 8-10 developers working on one large code base consisting of 60 projects. We use Subversion and have a main trunk. When a developer starts a new FogBugz case they create an svn branch, do the work on the branch, and when they're done they merge back to the trunk. Occasionally they may stay on the branch for an extended time and merge the trunk into the branch to pick up the changes. When I watched Linus talk about people creating a branch and never doing it again, that's not us at all. We create probably 50-100 branches a week without issue. The biggest challenge is the merging, but we've gotten pretty good at that as well. I tend to merge by FogBugz case and checkin rather than the entire root of the branch. We never work remotely and we never make branches off of branches. If you're the only one working in that section of the code base then the merge to the trunk goes smoothly. If someone else has modified the same section of code then the merge can get messy and you might need to do some surgery. Conflicts are conflicts; I don't see how any system could get it right most of the time unless it was smart enough to understand the code. After creating a branch, the subsequent checkout of 60k+ files takes some time, but that would be an issue with any source control system we'd use. Is there some benefit of any DVCS that we're not seeing that would be of great help to us?
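    For comparison, a minimal sketch of the same per-case workflow in Mercurial (the bookmark name is made up); the point is that creating and switching lines of development happens locally in one working copy, with no fresh 60k-file checkout:

        hg pull                          # fetch everyone's changes without touching the working copy
        hg bookmark case-1234            # start the FogBugz case on a lightweight local head
        # ...edit and "hg commit" as often as you like, entirely offline...
        hg update default                # jump back to the mainline in the same checkout
        hg merge case-1234               # fold the finished case back in
        hg commit -m "Merge case 1234"
        hg push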

    Read the article

  • please help me understand libraries/includes

    - by fiftyeight
    I'm trying to understand how libraries work. For example, I downloaded a tarball and extracted it. Now I run "./configure", and from what I understand it searches pre-defined directories for certain library files. What does it do then? It creates a makefile, and the makefile contains the paths to these libraries? Then I run "make", and it compiles the source code and hard-codes the locations of the libraries? Am I correct? I do not really understand whether libraries are files with pre-defined paths or whether the OS somehow gives access to them through system calls.

    Another example: I compiled something on my computer, then moved it to a remote server. The executable needs the MySQL libraries to work; the server has MySQL, but for some reason when I execute the file it tells me "can't find libmysqlclient.so.16". Is there a solution for this? Is there a way to know where it tries to locate this file, or to give it another path? I can't compile it on the server since it has no compiler, and I don't have root access to install packages.

    Last question: in the sequence "./configure", "make", "make install", is "make install" the only command that actually puts files outside the directory in which the sources reside? If, for example, the software will be installed in /usr/local/, is "make install" the only command that will require "sudo" before it? Let me see if I got it correctly: "./configure" creates the Makefile according to the location of various files on your system; "make" compiles the source code according to this makefile; and "make install" moves the files to their appropriate location. I know this has been very long; I thank anyone who had the patience to read my question :)
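    A minimal sketch of chasing the missing libmysqlclient at run time (the binary name and paths are placeholders; the dynamic loader searches its standard directories plus whatever LD_LIBRARY_PATH adds, so no root is needed):

        ldd ./myprogram | grep mysql                      # which .so the binary wants, and whether it was found
        ldconfig -p | grep libmysqlclient                 # what the loader already knows about
        find / -name 'libmysqlclient.so*' 2>/dev/null     # the server may only have a newer soname, e.g. .18

        # point the loader at a directory you control
        mkdir -p ~/lib && cp /path/to/libmysqlclient.so.16 ~/lib/
        export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH
        ./myprogram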

    Read the article

  • Benefits of classic OOP over a Go-like language

    - by tylerl
    I've been thinking a lot about language design and what elements would be necessary for an "ideal" programming language, and studying Google's Go has led me to question a lot of otherwise common knowledge. Specifically, Go seems to have all of the interesting benefits from object oriented programming without actually having any of the structure of an object oriented language. There are no classes, only structures; there is no class/structure inheritance -- only structure embedding. There aren't any hierarchies, no parent classes, no explicit interface implementations. Instead, type casting rules are based on a loose system similar to duck-typing, such that if a struct implements the necessary elements of a "Reader" or a "Request" or an "Encoding", then you can cast it and use it as one. Does such a system obsolete the concept of OOP? Or is there something about OOP as implemented in C++ and Java and C# that is inherently more capable, more maintainable, somehow more powerful that you have to give up when moving to a language like Go? What benefit do you have to give up to gain the simplicity that this new paradigm represents?

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would feel like using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive-makers have to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
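    A quick worked example of the two conventions for a nominal 1 TB (10^12 byte) drive, which is where the "931 GB" figure comes from:

        echo $(( 10**12 / 1000**3 ))   # 1000 -> "1000 GB" under the decimal (SI) definition
        echo $(( 10**12 / 1024**3 ))   # 931  -> 931 GiB, which the OS then reports as "931 GB"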

    Read the article

  • How should I safely send bulk mail? [closed]

    - by Jerry Dodge
    First of all, we have a large software system we've developed, and a number of clients use it in their own environments. Each of them is responsible for their own equipment and resources; we don't provide any shared services. We have introduced an automated email system which sends emails via SMTP. Usually it only sends around 10-20 emails a day, but it could well send bulk email to thousands of people in a single day. That of course requires a fair amount of work, which isn't necessarily the problem. The issue arises with the SMTP server we're using. An email server is allowed a number of relays a day, which is paid for. That isn't really the issue either. The risk is getting the email server blacklisted; it's inevitable, and we need to take all of this into careful consideration. As far as I can see, the ideal setup would be at least 50 IP addresses on multiple servers, each hosting its own SMTP server. When sending bulk email, the system would divide the messages across these servers, and each one would process its own queue. If one of those IPs gets blacklisted, it would be decommissioned and a new IP would replace it. Is there a better way that doesn't require us to invest in a large handful of servers? Perhaps a third-party service which is meant for exactly this?

    Read the article

  • Upgrade to 12.04 results in an empty Dash and no date & time on the top panel

    - by Nicolas
    I've upgraded from some Ubuntu Netbook Remix release to 12.04 LTS, and I've got two issues. (It's a 32-bit Asus Eee PC with an Intel 945GME x86/MMX/SSE2 GPU and an Intel Atom N270 CPU @ 1.6 GHz x2.)
    1. Nothing in the Dash: only the "home" tab is there, the other tabs are missing, and there are no search results whatsoever.
    2. Missing elements in System Settings: Privacy and Date & Time are gone, and there is no date & time in the top-right corner either.
    I've tried to reset Unity from the terminal, but the process was a whole mess full of errors. It did show Date & Time in System Settings (though not in the top-right corner) while the process was running in the terminal, but it was such a mess (no more icons in the right corner, among other things) and the process wouldn't complete, so I had to reboot the computer and got Unity as before, still with no Date & Time or Privacy.
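    A minimal sketch of one thing worth trying before another full reset: reinstalling the pieces that provide the Dash lenses and the clock indicator (package names assumed from a stock 12.04 Unity install):

        sudo apt-get update
        sudo apt-get install --reinstall unity unity-lens-applications unity-lens-files unity-lens-music
        sudo apt-get install --reinstall indicator-datetime
        # then log out and back in so Unity and the panel indicators reload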

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-27

    - by Bob Rhubart
    Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen
    "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g; How does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?"

    Process Oracle OER Events using a simple Web Service | Bob Webster
    Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types."

    Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients | Andre Correa
    "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen.

    WebCenter Content (WCC) Trace Sections | ECM Architect
    ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace sections.

    Thought for the Day
    "A complex system that works is invariably found to have evolved from a simple system that worked." — John Gall
    Source: SoftwareQuotes.com

    Read the article

  • ASP.NET: Serializing and deserializing JSON objects

    - by DigiMortal
    ASP.NET offers a very easy way to serialize objects to the JSON format, and it is just as easy to deserialize JSON objects using the same library. In this posting I will show you how to serialize and deserialize JSON objects in ASP.NET. All the required classes are located in the System.ServiceModel.Web assembly, and the JSON serializer lives in the System.Runtime.Serialization.Json namespace. To serialize an object to a stream we can use the following code:

        var serializer = new DataContractJsonSerializer(typeof(MyClass));
        serializer.WriteObject(myStream, myObject);

    To deserialize an object from a stream we can use the following code. CopyStream() is practically the same as my Stream.CopyTo() extension method.

        var serializer = new DataContractJsonSerializer(typeof(MyClass));

        using (var stream = response.GetResponseStream())
        using (var ms = new MemoryStream())
        {
            CopyStream(stream, ms);
            results = serializer.ReadObject(ms) as MyClass;
        }

    Why did I copy the data from the response stream to a memory stream? The point is simple: the serializer uses some stream features that are not supported by the response stream. Using a memory stream we can deserialize an object that came over the web.

    Read the article

  • USB mass storage couldn't get mounted

    - by revo
    It's my Android phone's SD card, which Android reported as damaged last night, out of the blue! I put it directly into a USB port using a USB SD card holder, hoping to recover it with TestDisk, which I had done before in a similar situation. I also noticed a change in its file system and capacity:
        File System : RAW
        Capacity : 0 (unknown capacity)
    TestDisk doesn't show it in its partition list either. A 2 GB SD card isn't worth much, but it holds a lot of files and media that I need. Using a mini card reader, TestDisk did display it in its list, but neither a quick search nor a deep search gives any results ("No partition found or selected for recovery") and then I have to quit the program. Your help is appreciated.

    Update #2 - lsusb output:
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 002: ID 04f3:0234 Elan Microelectronics Corp.
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 002: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 007 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 006 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
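    A minimal sketch of working from an image instead of the failing card itself (the /dev/sdc device node and file names are assumptions; confirm the device with dmesg/lsblk before reading from it):

        sudo dmesg | tail -n 20          # confirm which device node the reader was given, e.g. /dev/sdc
        lsblk                            # sanity-check that the size matches the 2 GB card

        sudo apt-get install gddrescue testdisk
        sudo ddrescue -d -r3 /dev/sdc sdcard.img sdcard.log   # copy whatever is still readable
        testdisk sdcard.img              # run the partition search against the image
        photorec sdcard.img              # if no partition is recoverable, carve files directly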

    Read the article

  • Internet is far slower in Ubuntu than Windows 7 on dual-booted machine

    - by Tim
    Edit: I'll leave the original post as-is, but after further investigation, it appears that the problem is something to do with my wi-fi card. Speeds are normal when I connect via cable.

    Edit 2: Problem was solved. It was something to do with the wireless card drivers.

    I normally use Windows 7 on my laptop and have internet speeds that are normally about 15-20 Mb/s. I have recently dual-booted with Ubuntu 12.10, and have noticed that internet speeds are drastically slower in Ubuntu. When tested, speeds range from 0.2-2 Mb/s, although occasionally they are significantly faster than that or even stop completely for short periods of time. I've also noticed that when first booting into Ubuntu, speeds start fairly fast and drop to incredibly slow within a few seconds to a few minutes. There's still some possibility that the issue may be with my ISP, as things seem slower than usual even in Windows, but I suspect that it is related to Ubuntu, as things are far slower in Ubuntu than in Windows. I'm wondering, what could be the cause of this?

    Potentially relevant information:
    - I've dual booted before on this machine with earlier versions of Ubuntu (different ISP at the time) with no problem.
    - ISP: Rogers (major Canadian ISP)

    System info (Gateway NV53a laptop):
        Operating System: MS Windows 7 Home Premium 64-bit
        CPU: AMD Phenom II N970 Caspian 45nm Technology
        RAM: 6.00 GB Dual-Channel DDR3 @ 664MHz (9-9-9-24)
        Motherboard: Gateway SJV51_DN (Socket S1G4)
        Graphics: Generic PnP Monitor (1366x768@60Hz), ATI Mobility Radeon HD 4250 (Acer Incorporated [ALI])
        Hard Drives: 733GB TOSHIBA TOSHIBA MK7559GSXP ATA Device (SATA)

    Networking info:
        Connected through Wi-Fi
        Atheros AR5B97 Wireless Network A

    Read the article

  • Finding links among many written and spoken thoughts

    - by Peter Fren
    So... I am using a digital voice recorder to record anything I consider important, ranging from business to private matters, from rants to new business ideas. Every finalized idea is one wma file. I wrote a program to sort the wma files into folders. From time to time I listen to the wma files, convert them to text (manually), and insert them into a mind map with MindManager, which I in turn sort hierarchically by area and type. This works very well: no idea gets lost, and when I am out of ideas for a particular topic, I listen to what I said and can get started again. What could a search system look like that finds links between thoughts (written in the mind map and in the wma files), or that in general gives me good search results even when the keyword I searched for is not present but a synonym or related topic is (for instance, "flower" should also return entries containing "orchid", even if they contain "orchid" but not the keyword "flower")? I would prefer something ready-made, but small adjustments to an existing system are fine as well. How would you approach this task?

    Read the article

  • Installing Ubuntu 12.04 on a single GPT SSD which contains Windows 7

    - by Gary
    I recently bought a brand new 64-bit PC with an ASUS motherboard that supports UEFI and a GPT-formatted 240 GB SSD, which contains Windows 7 in the first of three 80 GB partitions. When the system arrived, it booted into Windows 7 like a dream, with no problems. I did not originally want Windows, but the manufacturer does not work with Linux (of any flavour), so I thought I would install Ubuntu into the second partition and dual boot. I downloaded the 12.04 64-bit version and proceeded to install. After selecting 'install', the screen became corrupted, with multicoloured garbage across the middle third of the screen!! So I rebooted, and: MISSING OPERATING SYSTEM!!! The only way I can now get into Windows is via Super Grub2.
    First question: what went wrong?
    Second: will Ubuntu install on a GPT disk partition?
    Third: will it install alongside Windows 7 without screwing up the boot mechanism?
    Fourth: how do I do it?
    I have scoured the internet looking for appropriate answers and found NONE! Please help.
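    A minimal sketch of getting back to a bootable state and checking the firmware facts before retrying (the Boot-Repair PPA named below is the commonly used one, assumed here; run this from the Ubuntu live session):

        # confirm the disk really is GPT and whether the live session itself booted via UEFI
        sudo parted -l | grep -i 'partition table'
        [ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in BIOS/CSM mode"

        # Boot-Repair can usually restore the Windows boot entry that went missing
        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update && sudo apt-get install boot-repair
        boot-repair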

    Read the article

  • Too much I/O in the morning?

    - by steveh99999
    Here's an interesting little improvement on a SQL 2005 system I encountered recently. Some background: this system had a fairly traditional OLTP workload, i.e. heavily used during the day until around 9pm, then a batch window for several hours, then not much activity in the early hours until the normal workload resumed the following morning. Using perfmon, I noticed that every morning we would see a big spike in SQL Server I/O when the application started to be used. As it was SQL 2005, I decided to look at which tables were in cache before and after the overnight batch processing ran (using the DMV equivalent of dbcc memusage that I posted earlier). Here's what I saw: the contents of the data cache were split fairly evenly between my 'important/heavily used' tables. After this, some application batch processing, backups, DBCC checks and reindexes were run; a fairly standard batch, I'd suggest. The cache contents then looked quite different: most of the cache was now being used by a table I'd describe as 'unimportant'. Why? Well, that table was the last to be reindexed, purely by luck, as the reindexing stored procedure looped through all application tables in alphabetical order. When the application starts to be used again, all this 'unimportant' data has to be replaced in cache by the data that is heavily used. So we changed the overnight reindex scripts: the most heavily accessed tables are now the last to be reindexed. Obvious really, but we did see a significant reduction in early-morning I/O after changing the order of our reindexing.

    Read the article
