Search Results

Search found 18135 results on 726 pages for 'shared objects'.


  • Efficiency Question for an Ajax App

    - by Kubi
    Hi, I am currently working on a web application that uses a txt file as its database for testing; we will connect it to a server later on. My question is whether there is a more efficient way to get my objects than the way I am using now. During Page_Init I load all my objects into a List collection and use it to populate the AJAX Control Toolkit Accordion controls on the page. I also have some client-side buttons that fire callbacks to fetch other objects and populate the accordions inside an UpdatePanel. I use the .NET collections such as Dictionary and List heavily, and I am wondering whether plain arrays would be more efficient. Could you advise me on how to make this site better and faster? Would it be better, or even possible, to initialize those TravelP objects in JavaScript up front and work with them on the client? Any comments would be greatly appreciated. Thanks
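    For illustration only, a minimal sketch of the caching idea the question is circling around: parse the txt file once, keep the resulting list in the ASP.NET cache, and let Page_Init and the callbacks reuse it instead of re-reading the file. The TravelP type, its Parse method and the file path are assumptions, not taken from the question.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Web;
        using System.Web.Caching;

        public static class TravelPackageStore
        {
            // Parse once, cache the list for ten minutes, reuse it from Page_Init and from callbacks.
            public static List<TravelP> GetAll()
            {
                var cached = (List<TravelP>)HttpRuntime.Cache["TravelPackages"];
                if (cached == null)
                {
                    string path = HttpContext.Current.Server.MapPath("~/App_Data/packages.txt"); // assumed location
                    cached = File.ReadAllLines(path)
                                 .Select(line => TravelP.Parse(line))   // TravelP.Parse is hypothetical
                                 .ToList();
                    HttpRuntime.Cache.Insert("TravelPackages", cached, null,
                                             DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
                }
                return cached;
            }
        }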

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code you will not be able to tell whether it performs well just by reading it. You need to execute it with some sort of tracing, or better yet under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long. If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function therefore has not many values but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should only happen once in 100 000 years. 100 000 years is a very short time when the first customer tries to run your heavily distributed networking application over a slow WIFI network… What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real-world systems not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model to the real world is to measure it. In the WIFI example the model assumed a low-latency, high-bandwidth LAN connection. When this assumption became wrong, the system showed a drastic change in startup time. Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a cache class with the UI themes we want to support. Since fonts are expensive objects we create them on demand, the first time the theme is requested. A simple example of a theme cache might look like this:

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        struct Theme
        {
            public Color Color;
            public Font Font;
        }

        static class ThemeCache
        {
            static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
            {
                {"Default", new Theme { Color = Color.AliceBlue }},
                {"Theme12", new Theme { Color = Color.Aqua }},
            };

            public static Theme Get(string theme)
            {
                Theme cached = _Cache[theme];
                if (cached.Font == null)
                {
                    Console.WriteLine("Creating new font");
                    cached.Font = new Font("Arial", 8);
                }
                return cached;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Theme item = ThemeCache.Get("Theme12");
                item = ThemeCache.Get("Theme12");
            }
        }

    This cache creates font objects only once, since on the first retrieval of the Theme object the font is added to it. When we let the application run it should print “Creating new font” only once. Right? Wrong!
    The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code

        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
        }

    works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller, but the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary. Our cache as it currently stands is actually a non-caching cache. The funny thing was that I found this out with a profiler, by looking at which objects were finalized: I found way too many Font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means that the cache was never needed (I found no perf issue due to the creation of font objects), that it was never profiled to check whether it brought any performance gain, and that to be beneficial it would need to be accessed much more often. That was the story of the non-caching cache. Next time I will write something about measuring.
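    As a minimal sketch of the second fix mentioned above (keeping Theme a struct but writing the updated copy back into the dictionary), the Get method could look roughly like this; the alternative fix is simply to declare Theme as a class:

        public static Theme Get(string theme)
        {
            Theme cached = _Cache[theme];
            if (cached.Font == null)
            {
                Console.WriteLine("Creating new font");
                cached.Font = new Font("Arial", 8);
                _Cache[theme] = cached;   // store the mutated copy back, otherwise the font is lost with the local copy
            }
            return cached;
        }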

    Read the article

  • Help with force close occurrences in my app

    - by Ken
    This is the last issue with this app. Periodic force close situations. I think something should be on another thread but I'm not sure what. Anyway, I can always count on a freeze on first install. If I wait, eventually (maybe 10 seconds) the app comes around, maybe more. here is an excerpt from logcat--the three lines occur after full layout is displayed and I attempt to touch a [game] 'peg' which should spawn a sprite, but the freeze occurs there. Can anybody tell what the issue might be?: I/System.out( 279): TouchDown (17.0,106.0) I/System.out( 279): checking (17,106 I/System.out( 279): hit for bounds Rect(3, 98 - 32, 130) [FREEZE BEGINS] W/webcore ( 279): Can't get the viewWidth after the first layout W/WindowManager( 60): Key dispatching timed out sending to com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree W/WindowManager( 60): Previous dispatch state: null W/WindowManager( 60): Current dispatch state: {{null to Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} @ 1295232880017 lw=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} lb=android.os.BinderProxy@440523b8 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false}}} I/Process ( 60): Sending signal. PID: 279 SIG: 3 I/dalvikvm( 279): threadid=3: reacting to signal 3 D/dalvikvm( 124): GC_EXPLICIT freed 1754 objects / 106104 bytes in 7365ms I/Process ( 60): Sending signal. PID: 60 SIG: 3 I/dalvikvm( 60): threadid=3: reacting to signal 3 I/dalvikvm( 60): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 263 SIG: 3 I/dalvikvm( 263): threadid=3: reacting to signal 3 I/dalvikvm( 279): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 117 SIG: 3 I/dalvikvm( 117): threadid=3: reacting to signal 3 I/dalvikvm( 117): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 254 SIG: 3 I/Process ( 60): Sending signal. PID: 121 SIG: 3 I/dalvikvm( 121): threadid=3: reacting to signal 3 D/AudioSink( 34): bufferCount (4) is too small and increased to 12 I/System.out( 279): making white sprite I/Process ( 60): Sending signal. PID: 186 SIG: 3 I/Process ( 60): Sending signal. PID: 232 SIG: 3 D/MillennialMediaAdSDK( 279): size: 1 D/MillennialMediaAdSDK( 279): num: 1 D/AdWhirl SDK( 279): Millennial success D/AdWhirl SDK( 279): Will call rotateAd() in 120 seconds I/dalvikvm( 232): threadid=3: reacting to signal 3 I/dalvikvm( 121): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 222 SIG: 3 I/MillennialMediaAdSDK( 279): Millennial ad return success D/MillennialMediaAdSDK( 279): View height: 0 D/MillennialMediaAdSDK( 279): nextUrl: [deleted] I/Process ( 60): Sending signal. PID: 239 SIG: 3 I/Process ( 60): Sending signal. PID: 213 SIG: 3 D/AdWhirl SDK( 279): Added subview D/AdWhirl SDK( 279): Pinging URL: [deleted] I/Process ( 60): Sending signal. PID: 197 SIG: 3 I/dalvikvm( 197): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. PID: 164 SIG: 3 I/dalvikvm( 164): threadid=3: reacting to signal 3 D/dalvikvm( 279): GC_FOR_MALLOC freed 7735 objects / 639688 bytes in 217ms I/Process ( 60): Sending signal. PID: 124 SIG: 3 I/dalvikvm( 124): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. PID: 158 SIG: 3 I/dalvikvm( 158): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. 
PID: 127 SIG: 3 E/ActivityManager( 60): ANR in com.live.brainbuilderfree (com.live.brainbuilderfree/.BrainBuilderFree) E/ActivityManager( 60): Reason: keyDispatchingTimedOut E/ActivityManager( 60): Load: 3.46 / 1.69 / 0.65 E/ActivityManager( 60): CPU usage from 28095ms to 140ms ago: E/ActivityManager( 60): system_server: 30% = 25% user + 4% kernel / faults: 3119 minor 66 major E/ActivityManager( 60): mediaserver: 11% = 7% user + 4% kernel / faults: 746 minor 17 major E/ActivityManager( 60): com.svox.pico: 1% = 0% user + 1% kernel / faults: 2833 minor 8 major E/ActivityManager( 60): d.process.acore: 1% = 0% user + 0% kernel / faults: 1146 minor 36 major E/ActivityManager( 60): ndroid.launcher: 1% = 0% user + 0% kernel / faults: 852 minor 6 major E/ActivityManager( 60): m.android.phone: 0% = 0% user + 0% kernel / faults: 621 minor 7 major E/ActivityManager( 60): kswapd0: 0% = 0% user + 0% kernel E/ActivityManager( 60): ronsoft.openwnn: 0% = 0% user + 0% kernel / faults: 337 minor 2 major E/ActivityManager( 60): adbd: 0% = 0% user + 0% kernel / faults: 3 minor E/ActivityManager( 60): zygote: 0% = 0% user + 0% kernel / faults: 169 minor E/ActivityManager( 60): events/0: 0% = 0% user + 0% kernel E/ActivityManager( 60): rild: 0% = 0% user + 0% kernel / faults: 103 minor 3 major E/ActivityManager( 60): pdflush: 0% = 0% user + 0% kernel E/ActivityManager( 60): .quicksearchbox: 0% = 0% user + 0% kernel / faults: 61 minor E/ActivityManager( 60): id.defcontainer: 0% = 0% user + 0% kernel / faults: 12 minor E/ActivityManager( 60): +rainbuilderfree: 0% = 0% user + 0% kernel E/ActivityManager( 60): +sh: 0% = 0% user + 0% kernel E/ActivityManager( 60): +app_process: 0% = 0% user + 0% kernel E/ActivityManager( 60): TOTAL: 100% = 76% user + 21% kernel + 2% iowait + 0% irq + 0% softirq I/dalvikvm( 127): threadid=3: reacting to signal 3 I/dalvikvm( 186): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 3747 objects / 228920 bytes in 609ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.759MB for 36896-byte allocation I/dalvikvm( 239): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 226 objects / 9952 bytes in 546ms I/dalvikvm( 213): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 5816 bytes in 492ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.815MB for 49188-byte allocation I/dalvikvm( 222): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 77 objects / 5232 bytes in 546ms I/dalvikvm( 254): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 55856 bytes in 521ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.876MB for 98360-byte allocation D/dalvikvm( 60): GC_FOR_MALLOC freed 58 objects / 3632 bytes in 340ms D/dalvikvm( 60): GC_FOR_MALLOC freed 1093 objects / 185256 bytes in 572ms W/WindowManager( 60): Continuing to wait for key to be dispatched I/System.out( 279): TouchMove (117.0,124.0) I/System.out( 279): TouchUP (117.0,124.0) D/dalvikvm( 60): GC_FOR_MALLOC freed 141 objects / 108328 bytes in 564ms I/ARMAssembler( 60): generated scanline__00000077:03515104_00000000_00000000 [ 33 ipp] (47 ins) at [0x313d78:0x313e34] in 11621593 ns W/InputManagerService( 60): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@43f66a10 I/dalvikvm( 239): Wrote stack traces to '/data/anr/traces.txt' I/dalvikvm( 263): Wrote stack traces to '/data/anr/traces.txt' etc...

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory. The problem with shared memory Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory. Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, that deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently. At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (eg. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly. A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system. A sync-free data structure Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this intel article which reuses the nodes by putting them in a second queue to send back to the producer. * In all these cases, you need to use memory barriers correctly, but these are local to a core, so don't have the same scalability problems as atomic memory operations. Performance tests I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64 bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here. 
Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6 core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment! Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores? Still to be solved. The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
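    As an illustration of the circular-buffer idea described above (a minimal sketch, not the author's benchmarked implementation), a fixed-size single-producer/single-consumer queue needs no locks and no interlocked operations as long as exactly one thread enqueues and exactly one thread dequeues; volatile reads and writes of the head and tail indices act as the local memory barriers the footnote mentions:

        // Safe only for exactly one producer thread and one consumer thread.
        class SpscQueue<T>
        {
            private readonly T[] _buffer;
            private volatile int _head;   // next slot to read, advanced only by the consumer
            private volatile int _tail;   // next slot to write, advanced only by the producer

            public SpscQueue(int capacity) { _buffer = new T[capacity + 1]; }

            public bool TryEnqueue(T item)              // producer thread only
            {
                int next = (_tail + 1) % _buffer.Length;
                if (next == _head) return false;        // queue is full
                _buffer[_tail] = item;
                _tail = next;                           // volatile write publishes the item to the consumer
                return true;
            }

            public bool TryDequeue(out T item)          // consumer thread only
            {
                if (_head == _tail) { item = default(T); return false; }  // queue is empty
                item = _buffer[_head];
                _head = (_head + 1) % _buffer.Length;   // volatile write frees the slot for the producer
                return true;
            }
        }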

    Read the article

  • How to Share Files Between User Accounts on Windows, Linux, or OS X

    - by Chris Hoffman
    Your operating system provides each user account with its own folders when you set up several different user accounts on the same computer. Shared folders allow you to share files between user accounts. This process works similarly on Windows, Linux, and Mac OS X. These are all powerful multi-user operating systems with similar folder and file permission systems. Windows On Windows, the “Public” user’s folders are accessible to all users. You’ll find this folder under C:\Users\Public by default. Files you place in any of these folders will be accessible to other users, so it’s a good way to share music, videos, and other types of files between users on the same computer. Windows even adds these folders to each user’s libraries by default. For example, a user’s Music library contains the user’s music folder under C:\Users\NAME\as well as the public music folder under C:\Users\Public\. This makes it easy for each user to find the shared, public files. It also makes it easy to make a file public — just drag and drop a file from the user-specific folder to the public folder in the library. Libraries are hidden by default on Windows 8.1, so you’ll have to unhide them to do this. These Public folders can also be used to share folders publically on the local network. You’ll find the Public folder sharing option under Advanced sharing settings in the Network and Sharing Control Panel. You could also choose to make any folder shared between users, but this will require messing with folder permissions in Windows. To do this, right-click a folder anywhere in the file system and select Properties. Use the options on the Security tab to change the folder’s permissions and make it accessible to different user accounts. You’ll need administrator access to do this. Linux This is a bit more complicated on Linux, as typical Linux distributions don’t come with a special user folder all users have read-write access to. The Public folder on Ubuntu is for sharing files between computers on a network. You can use Linux’s permissions system to give other user accounts read or read-write access to specific folders. The process below is for Ubuntu 14.04, but it should be identical on any other Linux distribution using GNOME with the Nautilus file manager. It should be similar for other desktop environments, too. Locate the folder you want to make accessible to other users, right-click it, and select Properties. On the Permissions tab, give “Others” the “Create and delete files” permission. Click the Change Permissions for Enclosed Files button and give “Others” the “Read and write” and “Create and Delete Files” permissions. Other users on the same computer will then have read and write access to your folder. They’ll find it under /home/YOURNAME/folder under Computer. To speed things up, they can create a link or bookmark to the folder so they always have easy access to it. Mac OS X Mac OS X creates a special Shared folder that all user accounts have access to. This folder is intended for sharing files between different user accounts. It’s located at /Users/Shared. To access it, open the Finder and click Go > Computer. Navigate to Macintosh HD > Users > Shared. Files you place in this folder can be accessed by any user account on your Mac. These tricks are useful if you’re sharing a computer with other people and you all have your own user accounts — maybe your kids have their own limited accounts. 
You can share a music library, downloads folder, picture archive, videos, documents, or anything else you like without keeping duplicate copies.

    Read the article

  • Django unable to update model

    - by user292652
    I have the following function to override the default save method in a Match model:

        def save(self, *args, **kwargs):
            if self.Match_Status == "F":
                Team.objects.filter(pk=self.Team_one.id).update(Played=F('Played')+1)
                Team.objects.filter(pk=self.Team_two.id).update(Played=F('Played')+1)
                if self.Winner != "":
                    Team.objects.filter(pk=self.Winner.id).update(Win=F('Win')+1, Points=F('Points')+3)
                else:
                    return
            if self.Match_Status == "D":
                Team.objects.filter(pk=self.Team_one.id).update(Played=F('Played')+1, Draw=F('Draw')+1, Points=F('Points')+1)
                Team.objects.filter(pk=self.Team_two.id).update(Played=F('Played')+1, Draw=F('Draw')+1, Points=F('Points')+1)
            super(Match, self).save(*args, **kwargs)

    I am able to save the Match model just fine, but the Team model does not seem to be updating at all and no error is being thrown. Am I missing something here?

    Read the article

  • Eager loading vs. many queries with PHP, SQLite

    - by Mike
    I have an application that has an n+1 query problem, but when I implemented a way to load the data eagerly, I found absolutely no performance gain. I do use an identity map, so objects are only created once. Here's a benchmark of ~3000 objects. first query + first object creation: 0.00636100769043 sec. memory usage: 190008 bytes iterate through all objects (queries + objects creation): 1.98003697395 sec. memory usage: 7717116 bytes And here's one when I use eager loading. query: 0.0881109237671 sec. memory usage: 6948004 bytes object creation: 1.91053009033 sec. memory usage: 12650368 bytes iterate through all objects: 1.96605396271 sec. memory usage: 12686836 bytes So my questions are Is SQLite just magically lightning fast when it comes to small queries? (I'm used to working with MySQL.) Does this just seem wrong to anyone? Shouldn't eager loading have given much better performance?

    Read the article

  • Circular Dependency Solution

    - by gfoley
    Our current project has run into a circular dependency issue. Our business logic assembly uses classes and static methods from our SharedLibrary assembly. The SharedLibrary contains a whole bunch of helper functions, such as a SQL reader class, enumerators, global variables, error handling, logging and validation. The SharedLibrary needs access to the business objects, but the business objects need access to SharedLibrary. The old developers solved this obvious code smell by replicating the functionality of the business objects in the shared library (very anti-DRY). I've spent a day now reading about my options to solve this, but I'm hitting a dead end. I'm open to the idea of an architecture redesign, but only as a last resort. So how can I have a shared helper library which can access the business objects, with the business objects still accessing the shared helper library?
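    One common way out of this kind of cycle (not taken from the question) is dependency inversion: the shared library defines an interface, the business assembly implements it, and the concrete object is passed in at runtime, so SharedLibrary never has to reference the business assembly. A rough sketch with made-up names:

        using System;

        // Lives in SharedLibrary: no reference to the business assembly is needed.
        public interface IAuditable
        {
            int Id { get; }
            string EntityName { get; }
        }

        public static class AuditLogger
        {
            public static void Log(IAuditable entity, string action)
            {
                Console.WriteLine("{0} #{1}: {2}", entity.EntityName, entity.Id, action);
            }
        }

        // Lives in the business assembly, which keeps its existing reference to SharedLibrary.
        public class Customer : IAuditable
        {
            public int Id { get; set; }
            public string EntityName { get { return "Customer"; } }
        }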

    Read the article

  • Environment naming standards in software development?

    - by Marcus_33
    My project is currently suffering from environment naming issues. Different people have different assumptions as to what environments should be named or what the names designate, and it's causing confusion when discussing them. I've done a bit of research and I haven't found any standards out there. The terms include "Local", "Sand", "Dev", "Test", "User", "QA", "Staging" and "Prod" (plus a few more that different people have asked about). I'm not looking for just opinions, though if there's one out there that "everyone" has I'll take it; I'm trying to find definitions advanced by some sort of authority, even if it's unofficial. Here are the environments we currently use:
    - Environment on the developer's PC
    - Shared environment where developers directly upload code to self-test
    - Shared environment where standards and functionality are tested by QA people
    - Shared environment where completed and QA-checked code is approved by project requesters
    - Environment that mirrors the final environment, as a final check and to prepare for deployment
    - Final environment where the code is in use
    I know what I'd call them, but is there some sort of standard on this? Thanks in advance.

    Read the article

  • Mount exFAT partition in virtual guest machine

    - by Alain Jacomet
    I have a real Ubuntu 12.10 installation being virtualized under a Windows 8 host, by using the VBoxManage.exe internalcommands createrawvmdk method. I'm trying to mount an exFAT partition into the virtualized machine (which is the partition of shared work files), but even though I have fuse-exfat installed, and the partition is perfectly mountable while booting entirely into Ubuntu, I can't mount it while virtualizing it. 1- If I make a full vmdk image of the HDD, including all partitions, Ubuntu 12.10 "sees" the partition, and trying to mount it throws this error: Image: http://i.stack.imgur.com/AyUSn.png 2- If I make a machine with only the linux partitions, + the exFAT partition. Again Ubuntu "sees" the partition and the result is: Error: fsync failed Image: http://i.stack.imgur.com/u4SkC.png 3- If I make a machine with only the linux partitions, and try to mount it, Ubuntu doesn't "see" the partition, and I get this error: Image: i.stack.imgur.com/q1hz5.png I've tried using the VirtualBox' "Shared Folders" functionality but even though I install the "Guest Additions", the system doesn't seem to recognize the shared folder: Image: i.stack.imgur.com/yLU0E.png Help? Thanks!

    Read the article

  • Best way to access database from android

    - by Brandon Delany
    I am working on an Android app and I have a dilemma. I have a list of objects, and I have to update each of these objects against a database. I have two methods: Method 1: I can loop through the objects; for each object I connect to the server, update it, and then move on to the next object, and so forth. Method 2: I can store the objects in a list, send the whole list to the server, update it on the server side, then return a list of updated objects. My questions are: Which method is faster? Which method is easier on the phone's battery? By the way, Method 1 is easier for me to code :). Thank you.

    Read the article

  • I have 20 Ubuntu 12.04 LTS machines; some are unable to network with other machines, though they all have the same workgroup, viz. Ubuntu

    - by Gaurang Agrawal
    During installation I updated my workgroup to "Workgroup"; after installation I changed it to Ubuntu, as I was unable to access computers in the network. What changes do I need to make in the Samba configuration? I don't know if this is related:

        shared@shared:~$ testparm
        Load smb config files from /etc/samba/smb.conf
        rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
        Processing section "[printers]"
        Processing section "[print$]"
        Loaded services file OK.
        Server role: ROLE_STANDALONE
        Press enter to see a dump of your service definitions

        [global]
            workgroup = UBUNTU
            server string = %h server (Samba, Ubuntu)
            encrypt passwords = No
            map to guest = Bad User
            obey pam restrictions = Yes
            pam password change = Yes
            passwd program = /usr/bin/passwd %u
            passwd chat = Enter\snew\s\spassword:* %n\n Retype\snew\s\spassword:* %n\n password\supdated\ssuccessfully .
            username map = /etc/samba/smbusers
            unix password sync = Yes
            syslog = 0
            log file = /var/log/samba/log.%m
            max log size = 1000
            name resolve order = bcast host
            dns proxy = No
            usershare allow guests = Yes
            panic action = /usr/share/samba/panic-action %d
            idmap config * : backend = tdb

        [printers]
            comment = All Printers
            path = /var/spool/samba
            create mask = 0700
            printable = Yes
            print ok = Yes
            browseable = No

        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/printers

        shared@shared:~$ smbclient -L 192.168.1.108
        Enter shared's password:
        Connection to 192.168.1.108 failed (Error NT_STATUS_HOST_UNREACHABLE)

    Read the article

  • ADF Reusable Artefacts

    - by Arda Eralp
    Primary reusable ADF Business Component: Entity Objects (EOs) View Objects (VOs) Application Modules (AMs) Framework Extensions Classes Primary reusable ADF Controller: Bounded Task Flows (BTFs) Task Flow Templates Primary reusable ADF Faces: Page Templates Skins Declarative Components Utility Classes Certain components will often be used more than once. Whether the reuse happens within the same application, or across different applications, it is often advantageous to package these reusable components into a library that can be shared between different developers, across different teams, and even across departments within an organization. In the world of Java object-oriented programming, reusing classes and objects is just standard procedure. With the introduction of the model-view-controller (MVC) architecture, applications can be further modularized into separate model, view, and controller layers. By separating the data (model and business services layers) from the presentation (view and controller layers), you ensure that changes to any one layer do not affect the integrity of the other layers. You can change business logic without having to change the UI, or redesign the web pages or front end without having to recode domain logic. Oracle ADF and JDeveloper support the MVC design pattern. When you create an application in JDeveloper, you can choose many application templates that automatically set up data model and user interface projects. Because the different MVC layers are decoupled from each other, development can proceed on different projects in parallel and with a certain amount of independence. ADF Library further extends this modularity of design by providing a convenient and practical way to create, deploy, and reuse high-level components. When you first design your application, you design it with component reusability in mind. If you created components that can be reused, you can package them into JAR files and add them to a reusable component repository. If you need a component, you may look into the repository for those components and then add them into your project or application. For example, you can create an application module for a domain and package it to be used as the data model project in several different applications. Or, if your application will be consuming components, you may be able to load a page template component from a repository of ADF Library JARs to create common look and feel pages. Then you can put your page flow together by stringing together several task flow components pulled from the library. An ADF Library JAR contains ADF components and does not, and cannot, contain other JARs. It should not be confused with the JDeveloper library, Java EE library, or Oracle WebLogic shared library. Reusable Component Description Data Control Any data control can be packaged into an ADF Library JAR. Some of the data controls supported by Oracle ADF include application modules, Enterprise JavaBeans, web services, URL services, JavaBeans, and placeholder data controls. Application Module When you are using ADF Business Components and you generate an application module, an associated application module data control is also generated. When you package an application module data control, you also package up the ADF Business Components associated with that application module. The relevant entity objects, view objects, and associations will be a part of the ADF Library JAR and available for reuse. 
Business Components Business components are the entity objects, view objects, and associations used in the ADF Business Components data model project. You can package business components by themselves or together with an application module. Task Flows & Task Flow Templates Task flows can be packaged into an ADF Library JAR for reuse. If you drop a bounded task flow that uses page fragments, JDeveloper adds a region to the page and binds it to the dropped task flow. ADF bounded task flows built using pages can be dropped onto pages. The drop will create a link to call the bounded task flow. A task flow call activity and control flow will automatically be added to the task flow, with the view activity referencing the page. If there is more than one existing task flow with a view activity referencing the page, it will prompt you to select the one to automatically add a task flow call activity and control flow. If an ADF task flow template was created in the same project as the task flow, the ADF task flow template will be included in the ADF Library JAR and will be reusable. Page Templates You can package a page template and its artifacts into an ADF Library JAR. If the template uses image files and they are included in a directory within your project, these files will also be available for the template during reuse. Declarative Components You can create declarative components and package them for reuse. The tag libraries associated with the component will be included and loaded into the consuming project. You can also package up projects that have several different reusable components if you expect that more than one component will be consumed. For example, you can create a project that has both an application module and a bounded task flow. When this ADF Library JAR file is consumed, the application will have both the application module and the task flow available for use. You can package multiple components into one JAR file, or you can package a single component into a JAR file. Oracle ADF and JDeveloper give you the option and flexibility to create reusable components that best suit you and your organization. You create a reusable component by using JDeveloper to package and deploy the project that contains the components into a ADF Library JAR file. You use the components by adding that JAR to the consuming project. At design time, the JAR is added to the consuming project's class path and so is available for reuse. At runtime, the reused component runs from the JAR file by reference.

    Read the article

  • Changing coding style due to Android GC performance, how far is too far?

    - by Benju
    I keep hearing that Android applications should try to limit the number of objects created in order to reduce the workload on the garbage collector. It makes sense that you may not want to create massive numbers of objects to track with a limited memory footprint; on a traditional server application, for example, creating 100,000 objects within a few seconds would not be unheard of. The problem is how far I should take this. I've seen tons of examples of Android applications relying on static state in order to supposedly "speed things up". Does increasing the number of instances that need to be garbage collected from dozens to hundreds really make that big of a difference? I can imagine changing my coding style to not create hundreds of thousands of objects like you might have on a full-blown Java EE server, but relying on a bunch of static state to (supposedly) reduce the number of objects to be garbage collected seems odd. How much is it really necessary to change your coding style in order to create performant Android apps?

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember testing) and for other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. The reason I mention this is that commercial profilers support this functionality only in their professional editions, and normally only since .NET 4.0, because only from then on does the profiling API support attaching to a running process. This thing differs in many aspects from “normal” profilers, because while profiling yourself you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which only exists as a method local in another thread, you can get your hands on it now … Enough theory. Show me some code:

        /// <summary>
        /// Show feature to not only get statistics out of a process but also the newly allocated
        /// instances since the last call to MarkCurrentObjects.
        /// GetNewObjects does return the newly allocated objects as object array.
        /// </summary>
        static void InstanceTracking()
        {
            using (var dumper = new MemoryDumper()) // if you have problems use new MemoryDumper(true,true) to see the debugger windows
            {
                dumper.MarkCurrentObjects();
                Allocate();
                ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                         .ToLookup(x => x.GetType());
                Console.WriteLine("New Strings:");
                foreach (var newStr in newObjects[typeof(string)])
                {
                    Console.WriteLine("Str: {0}", newStr);
                }
            }
        }

    …

        New Strings:
        Str: qqd
        Str: String data:
        Str: String data: 0
        Str: String data: 1
        …

    This is really hot stuff. Not only can you get heap statistics, you can directly examine the new objects and run queries on them. When I find more time I can reconstruct the object root graph from it within my own process. Is this cool or what? You can also peek into the finalization queue to check whether you accidentally forgot to dispose a whole bunch of objects …

        /// <summary>
        /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization and have no other object roots anymore.
        /// </summary>
        static void NotYetFinalizedObjects()
        {
            using (var dumper = new MemoryDumper())
            {
                object[] finalizable = dumper.GetObjectsReadyForFinalization();
                Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                                  finalizable.Length,
                                  String.Join(",", finalizable.ToLookup(x => x.GetType())
                                                              .Select(x => x.Key.Name)));
            }
        }

    How does it work? The W of WMemoryProfiler is a good hint. It employs Windbg and the SOS dll to do the heavy lifting and concentrates on an easy-to-use API which hides Windbg completely. If you do not want to see Windbg, you will never see it. In my experience the most complex thing is actually to download Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if it is missing, so I will not go into this here. What next? Depending on the feedback I get, I can imagine some features which might be useful as well:
    - Calculate first-order GC roots from the actual object graph
    - Identify global statics in types in the object graph
    - Support reading out the finalization queue of .NET 2.0 as well
    - Support memory dump analysis (again a feature only supported by commercial profilers in their professional editions, if it is supported at all)
    - Deserialize objects from a memory dump back into a live process (this would need some more investigation, but it is doable)
    The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big, but since it is never written out it is very fast to generate. When your process crashes with a memory dump, you could transfer this data structure back into a live viewer which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look here at the current feature list of WMemoryProfiler with some examples. How to get started? First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment in the Main method the scenario you want to check out. If you are greeted with an exception, it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I have seen something more limited in the Java world some years ago (now I cannot find the link anymore), but anyway. Now we have something much better.

    Read the article

  • User script at logout

    - by GUI Junkie
    The problem: I'm sharing a directory with my wife. I've placed us both in a 'shared' group and the directory belongs to the 'shared' group as well. Whenever one of us creates a file, this file belongs to user:user, instead of user:shared... The solution: I can do sudo chown, but my wife can't. So, I want to run a script when I logout of the session. If I understand correctly, the startup scripts go in /etc/init.d/ and the runlevel scripts go /etc/rc0.d/ where 0 is the runlevel (0-6). Do the runlevel scripts execute only on exit/logout? Do these depend on the user, that is, I'd like to run it only for my user (not so important in this case, mind)? Should I place the script somewhere else? Also, I imagine that the script will be run by root, so there's no need for sudo within the script, is that correct?

    Read the article

  • Missing feature in Hyper-V from Virtual PC

    - by Kevin Shyr
    One thing I really miss is the ability to create shared folder between host and guest.  Virtual PC does this well, you can create Shared Folder to be used every time, or just this one.  I have read some posts on how to do this.  Some people suggest using ISO Creator to package up the files and mount the image to DVD drive, but what I need is truly a "shared" environment, so I'm currently looking into creating Virtual switch and creating an internal network between the host and guest.  Let's see how that works out. I would have loved to give Virtual SAN Manager a try, but I don't have a local Fibre Channel to set one up. I guess this might be an extension to my original post:  http://geekswithblogs.net/LifeLongTechie/archive/2011/05/05/windows-virtual-pc-vs.-hyper-v-virtual-machines-vs.-windows-virtual.aspx

    Read the article

  • Forcing a method to be non-transactional in JPA (Eclipselink)

    - by rhinds
    Hi, I am developing an application using EclipseLink, and as part of the app I need to be able to manipulate some of the objects, which involves changing data without it being persisted to the database (I am merging/changing objects for some batch generation processes). I am reluctant to change the data in the entity objects, as there is a risk that even though I have not marked the methods as @Transactional, such a method could in the future be inadvertently called from within a transactional method and these changes could be persisted. So my question is, is there any way to get around this? Such as forcing a method to always be non-transactional regardless, terminating any transactionality as soon as the method is started, etc. I know there is a .detach() method that can detach the objects from the EntityManager; however, there are many objects and this seems like a potentially error-prone fail-safe in my code.

    Read the article

  • How to make my view better to save Django

    - by user558251
    Hi guys, sorry for this post but I need help with my application; I need to optimize my view. I have 5 models. How can I do this?

        def save(request):
            # get the request.POST in content
            if request.POST:
                content = request.POST
                dicionario = {}
                # create a dict to get the values in content
                for key, value in content.items():
                    # get my fk Course.objects
                    if key == 'curso':
                        busca_curso = Curso.objects.get(id=value)
                        dicionario.update({key: busca_curso})
                    else:
                        dicionario.update({key: value})
                # create the new teacher
                Professor.objects.create(**dicionario)

    My questions are: 1 - How can I write this function in a generic way? Can I pass a variable in a %s to create and get, like this?

        foo = "Teacher"
        bar = "Course"

        def save(request, bar, foo):
            if request.POST:
                ...
                if key == 'course':
                    get_course = (%s.objects.get=id=value) % bar
                ...
                (%s.objects.create(**dict)) % foo  # ???

    I tried doing this in my view but it doesn't work =/. Can somebody help me make this work? Thanks

    Read the article

  • Access samba shares before login

    - by everlearnin
    I installed Ubuntu 12.10 on an old Compaq C300. I want to use it to share my movies over the LAN so that the rest of my family can watch without hijacking my PC. I installed Samba and shared the Public folder under the Home folder of my user account, but I can only access this folder from my Windows 7 PC when I am logged in. If I log out, or restart without logging in, then I cannot access the shared folder. I am used to the Windows service that starts at boot, making shared files available over the network before the user has logged in. How can I accomplish this in Ubuntu?

    Read the article

  • 2 (or more) ComboBoxes dependent on each other

    - by Mcad001
    Hi, I have an Organisation entity and a Region entity. An object of type Organisation can have one or more Region objects connected to it; thus I have a foreign key in my Region entity to the Organisation entity. The Organisation and Region objects are pulled from my database using WCF RIA Services and Entity Framework. I want to put the Organisation objects in one ComboBox and the Region objects in another ComboBox, and when selecting an organisation, have the ComboBox for Region objects automatically show only the regions that are connected to the selected organisation. This should be pretty basic, but the way I've designed it right now it doesn't work at all. So, any hint as to how I can achieve this? A simple code example is much appreciated! (I'm using SL4, WCF RIA, MVVM)
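    Since the question asks for a simple code example, here is a rough ViewModel sketch of one common MVVM approach (the class and property names, including Organisation.Id and Region.OrganisationId, are assumptions rather than taken from the question): the first ComboBox binds to Organisations and SelectedOrganisation, the second binds to FilteredRegions, and the setter of SelectedOrganisation rebuilds the filtered list and raises PropertyChanged.

        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Linq;

        public class RegionPickerViewModel : INotifyPropertyChanged
        {
            private readonly IList<Region> _allRegions;
            private Organisation _selectedOrganisation;

            public RegionPickerViewModel(IList<Organisation> organisations, IList<Region> regions)
            {
                Organisations = organisations;
                _allRegions = regions;
                FilteredRegions = new List<Region>();
            }

            public IList<Organisation> Organisations { get; private set; }
            public IList<Region> FilteredRegions { get; private set; }

            public Organisation SelectedOrganisation
            {
                get { return _selectedOrganisation; }
                set
                {
                    _selectedOrganisation = value;
                    // keep only the regions whose foreign key points at the selected organisation
                    FilteredRegions = value == null
                        ? new List<Region>()
                        : _allRegions.Where(r => r.OrganisationId == value.Id).ToList();
                    OnPropertyChanged("SelectedOrganisation");
                    OnPropertyChanged("FilteredRegions");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }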

    Read the article

  • a couple of Makefile issues

    - by user1623249
    I've got this Makefile:

        CFLAGS = -c -Wall
        CC = g++
        EXEC = main
        SOURCES = main.cpp listpath.cpp Parser.cpp
        OBJECTS = $(SOURCES: .cpp=.o)
        EXECUTABLE = tp
        DIR_SRC = /src/
        DIR_OBJ = /obj/

        all: $(SOURCES) $(OBJECTS)

        $(EXECUTABLE): $(OBJECTS)
            $(CC) $(CFLAGS) $(OBJECTS) -o $@

        .cpp.o:
            $(CC) $(CFLAGS) $< -o $@

        clean:
            rm $(OBJECTS) $(EXECUTABLE)

    Note this:
    - I'm in the directory "." which contains the Makefile
    - The folder "./src" EXISTS, and has all the .h and .cpp files
    - The folder "./obj" doesn't exist; I want the Makefile to create it and put all the .o files there
    The error I get is: No rules to build "main.cpp", necessary for "all". Stopping. Help!

    Read the article

  • How to model a relationship that NHibernate (or Hibernate) doesn’t easily support

    - by MylesRip
    I have a situation in which the ideal relationship, I believe, would involve Value Object Inheritance. This is unfortunately not supported in NHibernate so any solution I come up with will be less than perfect. Let’s say that: “Item” entities have a “Location” that can be in one of multiple different formats. These formats are completely different with no overlapping fields. We will deal with each Location in the format that is provided in the data with no attempt to convert from one format to another. Each Item has exactly one Location. “SpecialItem” is a subtype of Item, however, that is unique in that it has exactly two Locations. “Group” entities aggregate Items. “LocationGroup” is as subtype of Group. LocationGroup also has a single Location that can be in any of the formats as described above. Although I’m interested in Items by Group, I’m also interested in being able to find all items with the same Location, regardless of which group they are in. I apologize for the number of stipulations listed above, but I’m afraid that simplifying it any further wouldn’t really reflect the difficulties of the situation. Here is how the above could be diagrammed: Mapping Dilemma Diagram: (http://www.freeimagehosting.net/uploads/592ad48b1a.jpg) (I tried placing the diagram inline, but Stack Overflow won't allow that until I have accumulated more points. I understand the reasoning behind it, but it is a bit inconvenient for now.) Hmmm... Apparently I can't have multiple links either. :-( Analyzing the above, I make the following observations: I treat Locations polymorphically, referring to the supertype rather than the subtype. Logically, Locations should be “Value Objects” rather than entities since it is meaningless to differentiate between two Location objects that have all the same values. Thus equality between Locations should be based on field comparisons, not identifiers. Also, value objects should be immutable and shared references should not be allowed. Using NHibernate (or Hibernate) one would typically map value objects using the “component” keyword which would cause the fields of the class to be mapped directly into the database table that represents the containing class. Put another way, there would not be a separate “Locations” table in the database (and Locations would therefore have no identifiers). NHibernate (or Hibernate) do not currently support inheritance for value objects. My choices as I see them are: Ignore the fact that Locations should be value objects and map them as entities. This would take care of the inheritance mapping issues since NHibernate supports entity inheritance. The downside is that I then have to deal with aliasing issues. (Meaning that if multiple objects share a reference to the same Location, then changing values for one object’s Location would cause the location to change for other objects that share the reference the same Location record.) I want to avoid this if possible. Another downside is that entities are typically compared by their IDs. This would mean that two Location objects would be considered not equal even if the values of all their fields are the same. This would be invalid and unacceptable from the business perspective. Flatten Locations into a single class so that there are no longer inheritance relationships for Locations. This would allow Locations to be treated as value objects which could easily be handled by using “component” mapping in NHibernate. 
The downside in this case would be that the domain model becomes weaker, more fragile and less maintainable. Do some “creative” mapping in the hbm files in order to force Location fields to be mapped into the containing entities’ tables without using the “component” keyword. This approach is described by Colin Jack here. My situation is more complicated than the one he describes due to the fact that SpecialItem has a second Location and the fact that a different entity, LocatedGroup, also has Locations. I could probably get it to work, but the mappings would be non-intuitive and therefore hard to understand and maintain by other developers in the future. Also, I suspect that these tricky mappings would likely not be possible using Fluent NHibernate so I would use the advantages of using that tool, at least in that situation. Surely others out there have run into similar situations. I’m hoping someone who has “been there, done that” can share some wisdom. :-) So here’s the question… Which approach should be preferred in this situation? Why?
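    To make option 2 above a little more concrete, a rough sketch (the format-specific field names are invented, not from the post) of a flattened Location: one immutable class holding the fields of every format, compared by its field values so it behaves like a value object and can be mapped as an NHibernate component:

        public class Location
        {
            public string FormatCode { get; private set; }        // which of the original formats this instance represents
            public string FormatAFieldOne { get; private set; }   // hypothetical field from the first format
            public string FormatBFieldOne { get; private set; }   // hypothetical field from the second format

            public Location(string formatCode, string formatAFieldOne, string formatBFieldOne)
            {
                FormatCode = formatCode;
                FormatAFieldOne = formatAFieldOne;
                FormatBFieldOne = formatBFieldOne;
            }

            protected Location() { }   // NHibernate needs a parameterless constructor

            public override bool Equals(object obj)
            {
                var other = obj as Location;
                return other != null
                    && FormatCode == other.FormatCode
                    && FormatAFieldOne == other.FormatAFieldOne
                    && FormatBFieldOne == other.FormatBFieldOne;
            }

            public override int GetHashCode()
            {
                return string.Format("{0}|{1}|{2}", FormatCode, FormatAFieldOne, FormatBFieldOne).GetHashCode();
            }
        }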

    Read the article

  • Domain entities into (ASP.NET) Session, or better some kind of DTOs?

    - by Robert
    Currently we put domain objects into our ASP.NET session. Now we are considering moving from InProc sessions to a state server. This requires that all objects inside the session are serializable. Instead of annotating all objects with the [Serializable] attribute, we thought about creating custom session objects (DTO session objects?), which only contain the information we need.
    CONS:
    - Entities must be reloaded, which requires additional DB round-trips
    PROS:
    - Session state is smaller
    - Session information is more specific (could be a CON)
    - No unneeded annotation of domain entities
    What do you think? Should we use some kind of DTOs to store inside the session, or should we stick with good old entities?
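    As an illustration of the custom session object idea (all names invented), a small serializable DTO built from a domain entity; re-hydrating it later means reloading the entity by its id, which is the extra DB round-trip listed under CONS:

        using System;

        [Serializable]
        public class CustomerSessionDto
        {
            public int Id { get; set; }
            public string DisplayName { get; set; }

            // Copy only what the UI actually needs out of the domain entity.
            public static CustomerSessionDto FromEntity(Customer customer)
            {
                return new CustomerSessionDto { Id = customer.Id, DisplayName = customer.Name };
            }
        }

        // Usage (hypothetical): Session["CurrentCustomer"] = CustomerSessionDto.FromEntity(customer);
        // later: var customer = customerRepository.GetById(dto.Id);   // reload the real entity when needed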

    Read the article
