Search Results

Search found 12848 results on 514 pages for 'multi core'.


  • Why I switched from Asana.com

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/24/why-i-switch-from-asana.com.aspx I used Asana.com for 1-2 years and had a decent experience with it, but it was never easy. When I started using it, it caused a lot of confusion, and now I have switched away from it. When I first saw it, I really didn't understand how to make a list private. There is an icon at the top: click on it to make the list private. Even after doing that, I still wasn't sure it had worked. There was a lot of confusion at that time, and I had to discuss things far too much just to figure out small details. The UI is interesting but very hard to understand. What I am looking for is just a list that I can keep private, and share only when I choose to, by entering the email address of the person who should see the same list. A few days ago I noticed that my Windows 8 phone has an app called Microsoft OneNote. The good thing about this MS app is that I can record my voice in it: if someone wants to make a list for later, he just needs to speak and it is recorded. This is great when the mobile keypad is just not as fast as a regular keyboard. Google Docs is another good option to handle this: just make a document and use it, and share it with a friend with many sharing options. One great thing is that this app has a much simpler UI than the other apps. One more alternative is https://trello.com, which you may have heard about from Joel on his blog: http://www.joelonsoftware.com/items/2011/09/13.html There are many HTML5, browser-based and mobile apps, and many of them are multi-platform, which means you can use them everywhere from your PC to your pocket. One thing we all want is offline support: if you are not online, your changes are saved and pushed back to the server when you are online again. The biggest problem with some apps is that they look attractive and easy but are hard to learn. When a feature is not clearly defined in terms of what it does, it causes frustration and confusion for the user. When an app is not simple to use, people stop trying to learn it. That is the whole problem I have with Asana.com. If you don't want to try anything new, what about Sticky Notes, which is part of Windows 7? That app is still usable, since you can store text on it. If you know any good app for making a task list that is accessible from a tablet or mobile, please leave a comment here. In the whole world of apps there are a lot of apps doing this same thing differently; I have mentioned a few of them here. I hope this describes it well. Thanks for reading my post.

    Read the article

  • Intern Screening - Software 'Quiz'

    - by Jeremy1026
    I am in charge of selecting a new software development intern for a company that I work with. I wanted to give applicants a little 'quiz' before moving forward with interviews, to weed out the group a bit and find people who can demonstrate some skill. I put together the following quiz to send to applicants. It focuses only on PHP, because that is what about 95% of the work will be done in. I'm hoping to get some feedback on A) whether it's a good idea to send this to applicants, and B) whether it can be improved upon.

        # 1. FizzBuzz
        # Write a small application that does the following:
        # Counts from 1 to 100
        # For multiples of 3 output "Fizz"
        # For multiples of 5 output "Buzz"
        # For multiples of 3 and 5 output "FizzBuzz"
        # For numbers that are multiples of neither 3 nor 5, output the number.
        <?php ?>

        # 2. Arrays
        # Create a multi-dimensional array that contains
        # keys for 'id', 'lot', 'car_model', 'color', 'price'.
        # Insert three sets of data into the array.
        <?php ?>

        # 3. Comparisons
        # Without executing the code, tell if the expressions
        # below will return true or false.
        <?php
        if ((strpos("a","abcdefg")) == TRUE) echo "True"; else echo "False"; //True or False?
        if ((012 / 4) == 3) echo "True"; else echo "False"; //True or False?
        if (strcasecmp("abc","ABC") == 0) echo "True"; else echo "False"; //True or False?
        ?>

        # 4. Bug Checking
        # The code below is flawed. Fix it so that the code
        # runs properly without producing any Errors, Warnings
        # or Notices, and returns the proper value.
        <?php
        //Determine how many parts are needed to create a 3D pyramid.
        function find_3d_pyramid($rows)
        {
            //Loop through each row.
            for ($i = 0; $i < $rows; $i++)
            {
                $lastRow++;
                //Append the latest row to the running total.
                $total = $total + (pow($lastRow,3));
            }
            //Return the total.
            return $total;
        }
        $i = 3;
        echo "A pyramid consisting of $i rows will have a total of ".find_3d_pyramid($i)." pieces.";
        ?>

        # 5. Quick Examples
        # Create a small example to complete the task
        # for each of the following problems.
        # Create an md5 hash of "Hello World".
        # Replace all occurrences of "_" with "-" in the string "Welcome_to_the_universe."
        # Get the current date and time in the following format: YYYY/MM/DD HH:MM:SS AM/PM
        # Find the sum, average, and median of the following set of numbers: 1, 3, 5, 6, 7, 9, 10.
        # Randomly roll a six-sided die 5 times. Store the 5 rolls into an array.
        <?php ?>
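    For calibration, a reference FizzBuzz solution is only a few lines. Here is a minimal sketch (in Python for brevity; applicants would of course answer in PHP, but the logic carries over directly):

        # FizzBuzz reference: the combined 3-and-5 case must be tested first.
        for n in range(1, 101):
            if n % 15 == 0:
                print("FizzBuzz")
            elif n % 3 == 0:
                print("Fizz")
            elif n % 5 == 0:
                print("Buzz")
            else:
                print(n)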

    Read the article

  • Summit 2014 Registration Is Open

    - by KemButller
    Attention Oracle (employees) Field Team and Oracle JD Edwards Partners: REGISTRATION IS NOW OPEN for Oracle's 5th Annual JD Edwards Summit - Monday, January 27th through Friday, January 31st, 2014, in Broomfield, Colorado. The theme of this year's Summit is "Success Through Continued Innovation". Our goals are to update you on our current and future product roadmap, new products, and selling strategies for new prospects, to grow the footprint in our JD Edwards install base, and to provide a venue for networking. The JD Edwards Summit is to the selling and servicing community what COLLABORATE is to the user community. This is a MUST ATTEND event if you recommend, sell, implement and/or support the JD Edwards product, whether you are new to JD Edwards or a seasoned pro, an executive, an account executive, in presales or in consulting. The Summit promises a content-rich and unique networking experience for all attendees. Highlights include: Monday afternoon kicks off the Summit with a variety of workshops as well as an afternoon preview of the Sponsor Showcase. Start your networking at the Summit kickoff party Monday evening. Tuesday morning features several informative keynotes in the Summit General Assembly, followed by key messages delivered in Super Sessions in the afternoon, focused on each of the JD Edwards community audiences. The educational offerings continue on Wednesday and Thursday with over 90 breakout sessions on topics spanning technology, applications (core JD Edwards, Edge, Fusion), sales, presales and implementation. Friday concludes with new workshops for the implementation community. Attendees will be enriched with numerous opportunities to network with fellow partners and Oracle throughout the week. Consider bringing your team and using this venue to hold your own organization kickoff meeting prior to or after the Summit. Contact Sheila Ebbitt (Sheila.ebbitt@oracle-DOT-com) for further assistance with your planning. Attendees will be charged a Summit fee of US$ 250. Online registration cut-off is January 17, 2014. All registration requests after that time will be processed on-site at the event with an attendee fee of US$ 500. Please contact Rene Chapman (rene.chapman@oracle-DOT-com) for information on sponsorship opportunities. For further details on the JD Edwards Summit, including agenda, workshops, educational sessions, lodging, sponsors and Summit registration, click here! Register now! This is going to be an awesome event! John Schiff, Vice President, JD Edwards Business Development

    Read the article

  • django/python: is one view that handles two sibling models a good idea?

    - by clime
    I am using django multi-table inheritance: Video and Image are models derived from Media. I have implemented two views, video_list and image_list, which are just proxies to media_list. media_list returns images or videos (based on the input parameter model) for a certain object, which can be of type Event, Member, or Crag. The view alters its behaviour based on the input parameter action (a better name would be mode), which can be "edit" or "view". The problem is that in media_list I need to ask whether the input parameter model contains Video or Image so that I can do the right thing. A similar condition also appears in the helper method media_edit_list that is called from the view. I don't particularly like it, but the only alternative I can think of is to have separate (but almost identical) logic for video_list and image_list, and then probably also separate helper methods for videos and images: video_edit_list, image_edit_list, video_view_list, image_view_list. So four functions instead of just two. That I like even less, because the video functions would be very similar to the respective image functions. What do you recommend (see the sketch after the code)? Here is an extract of the relevant parts: http://pastebin.com/07t4bdza. I'll also paste the code here:

        #urls
        url(r'^media/images/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<mode>(view|edit))/$', views.image_list, name='image-list')
        url(r'^media/videos/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<mode>(view|edit))/$', views.video_list, name='video-list')

        #views
        def image_list(request, rel_model_tag, rel_object_id, mode):
            return media_list(request, Image, rel_model_tag, rel_object_id, mode)

        def video_list(request, rel_model_tag, rel_object_id, mode):
            return media_list(request, Video, rel_model_tag, rel_object_id, mode)

        def media_list(request, model, rel_model_tag, rel_object_id, mode):
            rel_model = tag_to_model(rel_model_tag)
            rel_object = get_object_or_404(rel_model, pk=rel_object_id)
            if model == Image:
                star_media = rel_object.star_image
            else:
                star_media = rel_object.star_video
            filter_params = {}
            if rel_model == Event:
                filter_params['event'] = rel_object_id
            elif rel_model == Member:
                filter_params['members'] = rel_object_id
            elif rel_model == Crag:
                filter_params['crag'] = rel_object_id
            media_list = model.objects.filter(~Q(id=star_media.id)).filter(**filter_params).order_by('date_added').all()
            context = {
                'media_list': media_list,
                'star_media': star_media,
            }
            if mode == 'edit':
                return media_edit_list(request, model, rel_model_tag, rel_object_id, context)
            return media_view_list(request, model, rel_model_tag, rel_object_id, context)

        def media_view_list(request, model, rel_model_tag, rel_object_id, context):
            if request.is_ajax():
                context['base_template'] = 'boxes/base-lite.html'
            return render(request, 'media/list-items.html', context)

        def media_edit_list(request, model, rel_model_tag, rel_object_id, context):
            if model == Image:
                get_media_edit_record = get_image_edit_record
            else:
                get_media_edit_record = get_video_edit_record
            media_list = [get_media_edit_record(media, rel_model_tag, rel_object_id) for media in context['media_list']]
            if context['star_media']:
                star_media = get_media_edit_record(context['star_media'], rel_model_tag, rel_object_id)
            else:
                star_media = None
            json = simplejson.dumps({
                'star_media': star_media,
                'media_list': media_list,
            })
            return HttpResponse(json, content_type=json_response_mimetype(request))

        def get_image_edit_record(image, rel_model_tag, rel_object_id):
            record = {
                'url': image.image.url,
                'name': image.title or image.filename,
                'type': mimetypes.guess_type(image.image.path)[0] or 'image/png',
                'thumbnailUrl': image.thumbnail_2.url,
                'size': image.image.size,
                'id': image.id,
                'media_id': image.media_ptr.id,
                'starUrl': reverse('image-star', kwargs={'image_id': image.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}),
            }
            return record

        def get_video_edit_record(video, rel_model_tag, rel_object_id):
            record = {
                'url': video.embed_url,
                'name': video.title or video.url,
                'type': None,
                'thumbnailUrl': video.thumbnail_2.url,
                'size': None,
                'id': video.id,
                'media_id': video.media_ptr.id,
                'starUrl': reverse('video-star', kwargs={'video_id': video.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}),
            }
            return record

        # models
        class Media(models.Model, WebModel):
            title = models.CharField('title', max_length=128, default='', db_index=True, blank=True)
            event = models.ForeignKey(Event, null=True, default=None, blank=True)
            crag = models.ForeignKey(Crag, null=True, default=None, blank=True)
            members = models.ManyToManyField(Member, blank=True)
            added_by = models.ForeignKey(Member, related_name='added_images')
            date_added = models.DateTimeField('date added', auto_now_add=True, null=True, default=None, editable=False)

        class Image(Media):
            image = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75})
            thumbnail_1 = ImageSpecField(source='image', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75})
            thumbnail_2 = ImageSpecField(source='image',
                                         #processors=[SmartResize(width=256, height=192)],
                                         processors=[ResizeToFit(height=164)],
                                         format='JPEG', options={'quality': 75})

        class Video(Media):
            url = models.URLField('url', max_length=256, default='')
            embed_url = models.URLField('embed url', max_length=256, default='', blank=True)
            author = models.CharField('author', max_length=64, default='', blank=True)
            thumbnail = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75}, null=True, default=None, blank=True)
            thumbnail_1 = ImageSpecField(source='thumbnail', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75})
            thumbnail_2 = ImageSpecField(source='thumbnail',
                                         #processors=[SmartResize(width=256, height=192)],
                                         processors=[ResizeToFit(height=164)],
                                         format='JPEG', options={'quality': 75})

        class Crag(models.Model, WebModel):
            name = models.CharField('name', max_length=64, default='', db_index=True)
            normalized_name = models.CharField('normalized name', max_length=64, default='', editable=False)
            type = models.IntegerField('crag type', null=True, default=None, choices=crag_types)
            description = models.TextField('description', default='', blank=True)
            country = models.ForeignKey('country', null=True, default=None)  # TODO: make this not null when db enables it
            latitude = models.FloatField('latitude', null=True, default=None)
            longitude = models.FloatField('longitude', null=True, default=None)
            location_index = FixedCharField('location index', length=24, default='', editable=False, db_index=True)  # handled by db, used for marker clustering
            added_by = models.ForeignKey('member', null=True, default=None)
            #route_count = models.IntegerField('route count', null=True, default=None, editable=False)
            date_created = models.DateTimeField('date created', auto_now_add=True, null=True, default=None, editable=False)
            last_modified = models.DateTimeField('last modified', auto_now=True, null=True, default=None, editable=False)
            star_image = models.ForeignKey('Image', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
            star_video = models.ForeignKey('Video', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
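    For illustration, here is a minimal sketch of the kind of refactoring I have been considering: collapsing the per-model if/else branches into one dispatch table keyed by the model class, so that media_list and media_edit_list stay generic. It is untested, and MEDIA_CONFIG is just a name I made up; the helpers are the ones defined above:

        # Sketch: one dispatch table instead of repeated "if model == Image" branches.
        MEDIA_CONFIG = {
            Image: {'star_attr': 'star_image', 'edit_record': get_image_edit_record},
            Video: {'star_attr': 'star_video', 'edit_record': get_video_edit_record},
        }

        def media_list(request, model, rel_model_tag, rel_object_id, mode):
            cfg = MEDIA_CONFIG[model]
            rel_model = tag_to_model(rel_model_tag)
            rel_object = get_object_or_404(rel_model, pk=rel_object_id)
            star_media = getattr(rel_object, cfg['star_attr'])
            # filtering, context building and mode handling continue as in the version above
            ...

        def media_edit_list(request, model, rel_model_tag, rel_object_id, context):
            get_record = MEDIA_CONFIG[model]['edit_record']
            media_list = [get_record(m, rel_model_tag, rel_object_id)
                          for m in context['media_list']]
            # JSON response built exactly as in the version above
            ...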

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) includes a performance and a memory profiler, which costs 614€ and is currently on sale with a special offer for 306€. It includes one year of free upgrades. The uneven € numbers come from the original 799€ price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace, like Ants, is a Just My Code profiler. For stuff where you do not have the pdbs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data you get, in the All Methods grid, a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far when analysing foreign code. But at least there is also a memory profiler included. For this I have to choose "Memory Profiler" as the Profiling Type in the first window, to check the memory consumption of VS. There are some interesting numbers to see, but I really miss the thread stack window from YourKit. How am I supposed to get a clue, when much memory is allocated and CPU consumption is high, about the places I should look at? The snapshot summary gives a rough overview, which is OK for a first impression. Next is Assemblies. This gives you a list of all loaded assemblies. Not terribly useful. The By Type view gives you exactly what it is supposed to. You have to keep in mind that this list is filtered by the types you checked in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft: by default, mscorlib and System are not checked. That is the reason why, the first time, my By Type window showed only a handful of types. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing yours. That's pretty much it about JustTrace. It can help the average developer pinpoint performance and memory issues by just looking at his own code and instances. Showing them more would not help, because the sheer amount of information would overwhelm them, and you need a pretty good understanding of how the GC and the CLR work. When you have a performance issue on a customer machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, …) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases, JustTrace is certainly the wrong tool. Next: SpeedTrace

    Read the article

  • 13.10 upgrade dropping wifi [on hold]

    - by Daryl
    Almost a complete newb here. After my last upgrade from 12.04 to 13.10, my wifi now randomly drops. The only way I can get a signal back is a shutdown and restart; otherwise it shows no network is even available to connect to. I had no problems until the upgrade. Any help would be appreciated.

        H/W path           Device      Class       Description
        =======================================================
                                       system      h8-1534 (H2N64AA#ABA)
        /0                             bus         2AC8
        /0/0                           memory      64KiB BIOS
        /0/4                           processor   AMD FX(tm)-6200 Six-Core Processor
        /0/4/5                         memory      288KiB L1 cache
        /0/4/6                         memory      6MiB L2 cache
        /0/4/7                         memory      8MiB L3 cache
        /0/d                           memory      10GiB System Memory
        /0/d/0                         memory      DIMM Synchronous [empty]
        /0/d/1                         memory      4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
        /0/d/2                         memory      2GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
        /0/d/3                         memory      4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
        /0/100                         bridge      RD890 PCI to PCI bridge (external gfx0 port B)
        /0/100/0.2                     generic     RD990 I/O Memory Management Unit (IOMMU)
        /0/100/2                       bridge      RD890 PCI to PCI bridge (PCI express gpp port B)
        /0/100/2/0                     display     Turks PRO [Radeon HD 7570]
        /0/100/2/0.1                   multimedia  Turks/Whistler HDMI Audio [Radeon HD 6000 Series]
        /0/100/5                       bridge      RD890 PCI to PCI bridge (PCI express gpp port E)
        /0/100/5/0                     bus         TUSB73x0 SuperSpeed USB 3.0 xHCI Host Controller
        /0/100/11                      storage     SB7x0/SB8x0/SB9x0 SATA Controller [RAID5 mode]
        /0/100/12                      bus         SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        /0/100/12.2                    bus         SB7x0/SB8x0/SB9x0 USB EHCI Controller
        /0/100/13                      bus         SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        /0/100/13.2                    bus         SB7x0/SB8x0/SB9x0 USB EHCI Controller
        /0/100/14                      bus         SBx00 SMBus Controller
        /0/100/14.2                    multimedia  SBx00 Azalia (Intel HDA)
        /0/100/14.3                    bridge      SB7x0/SB8x0/SB9x0 LPC host controller
        /0/100/14.4                    bridge      SBx00 PCI to PCI Bridge
        /0/100/14.5                    bus         SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
        /0/100/15                      bridge      SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0)
        /0/100/15.1                    bridge      SB700/SB800/SB900 PCI to PCI bridge (PCIE port 1)
        /0/100/15.2                    bridge      SB900 PCI to PCI bridge (PCIE port 2)
        /0/100/15.2/0      wlan0       network     RT3290 Wireless 802.11n 1T/1R PCIe
        /0/100/15.2/0.1                generic     RT3290 Bluetooth
        /0/100/15.3                    bridge      SB900 PCI to PCI bridge (PCIE port 3)
        /0/100/15.3/0      eth0        network     RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
        /0/100/16                      bus         SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        /0/100/16.2                    bus         SB7x0/SB8x0/SB9x0 USB EHCI Controller
        /0/101                         bridge      Family 15h Processor Function 0
        /0/102                         bridge      Family 15h Processor Function 1
        /0/103                         bridge      Family 15h Processor Function 2
        /0/104                         bridge      Family 15h Processor Function 3
        /0/105                         bridge      Family 15h Processor Function 4
        /0/106                         bridge      Family 15h Processor Function 5
        /0/1               scsi0       storage
        /0/1/0.0.0         /dev/sda    disk        1TB WDC WD1002FAEX-0
        /0/1/0.0.0/1                   volume      189MiB Windows FAT volume
        /0/1/0.0.0/2       /dev/sda2   volume      244MiB data partition
        /0/1/0.0.0/3       /dev/sda3   volume      931GiB LVM Physical Volume
        /0/2               scsi2       storage
        /0/2/0.0.0         /dev/cdrom  disk        DVD A DH16ACSHR
        /0/3               scsi6       storage
        /0/3/0.0.0         /dev/sdb    disk        SCSI Disk
        /0/3/0.0.1         /dev/sdc    disk        SCSI Disk
        /0/3/0.0.2         /dev/sdd    disk        SCSI Disk
        /0/3/0.0.3         /dev/sde    disk        MS/MS-Pro
        /0/3/0.0.3/0       /dev/sde    disk
        /1                             power       Standard Efficiency

    I apologize for my newbness. I hope this is enough info for the hardware. Thanks Bruno for pointing out I needed to add more info. If I am lacking anything else, please let me know and I'll post it.

    Read the article

  • django/python: is one view that handles two separate models a good idea?

    - by clime
    I am using django multi-table inheritance: Video and Image are models derived from Media. I have implemented two views, video_list and image_list, which are just proxies to media_list. media_list returns images or videos (based on the input parameter model) for a certain object, which can be of type Event, Member, or Crag. It alters its behaviour based on the input parameter action, which can be either "edit" or "view". The problem is that in media_list I need to ask whether the input parameter model contains Video or Image so that I can do the right thing. A similar condition also appears in the helper method media_edit_list that is called from the view. I don't particularly like it, but the only alternative I can think of is to have separate logic for video_list and image_list, and then probably also separate helper methods for videos and images: video_edit_list, image_edit_list, video_view_list, image_view_list. So four functions instead of just two. That I like even less, because the video functions would be very similar to the respective image functions. What do you recommend? Here is an extract of the relevant parts: http://pastebin.com/07t4bdza. I'll also paste the code here:

        #urls
        url(r'^media/images/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.image_list, name='image-list')
        url(r'^media/videos/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.video_list, name='video-list')

        #views
        def image_list(request, rel_model_tag, rel_object_id, action):
            return media_list(request, Image, rel_model_tag, rel_object_id, action)

        def video_list(request, rel_model_tag, rel_object_id, action):
            return media_list(request, Video, rel_model_tag, rel_object_id, action)

        def media_list(request, model, rel_model_tag, rel_object_id, action):
            rel_model = tag_to_model(rel_model_tag)
            rel_object = get_object_or_404(rel_model, pk=rel_object_id)
            if model == Image:
                star_media = rel_object.star_image
            else:
                star_media = rel_object.star_video
            filter_params = {}
            if rel_model == Event:
                filter_params['media__event'] = rel_object_id
            elif rel_model == Member:
                filter_params['media__members'] = rel_object_id
            elif rel_model == Crag:
                filter_params['media__crag'] = rel_object_id
            media_list = model.objects.filter(~Q(id=star_media.id)).filter(**filter_params).order_by('media__date_added').all()
            context = {
                'media_list': media_list,
                'star_media': star_media,
            }
            if action == 'edit':
                return media_edit_list(request, model, rel_model_tag, rel_object_id, context)
            return media_view_list(request, model, rel_model_tag, rel_object_id, context)

        def media_view_list(request, model, rel_model_tag, rel_object_id, context):
            if request.is_ajax():
                context['base_template'] = 'boxes/base-lite.html'
            return render(request, 'media/list-items.html', context)

        def media_edit_list(request, model, rel_model_tag, rel_object_id, context):
            if model == Image:
                get_media_record = get_image_record
            else:
                get_media_record = get_video_record
            media_list = [get_media_record(media, rel_model_tag, rel_object_id) for media in context['media_list']]
            if context['star_media']:
                star_media = get_media_record(context['star_media'], rel_model_tag, rel_object_id)
                star_media['starred'] = True
            else:
                star_media = None
            json = simplejson.dumps({
                'star_media': star_media,
                'media_list': media_list,
            })
            return HttpResponse(json, content_type=json_response_mimetype(request))

        # models
        class Media(models.Model, WebModel):
            title = models.CharField('title', max_length=128, default='', db_index=True, blank=True)
            event = models.ForeignKey(Event, null=True, default=None, blank=True)
            crag = models.ForeignKey(Crag, null=True, default=None, blank=True)
            members = models.ManyToManyField(Member, blank=True)
            added_by = models.ForeignKey(Member, related_name='added_images')
            date_added = models.DateTimeField('date added', auto_now_add=True, null=True, default=None, editable=False)

            def __unicode__(self):
                return self.title

            def get_absolute_url(self):
                return self.image.url if self.image else self.video.embed_url

        class Image(Media):
            image = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75})
            thumbnail_1 = ImageSpecField(source='image', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75})
            thumbnail_2 = ImageSpecField(source='image',
                                         #processors=[SmartResize(width=256, height=192)],
                                         processors=[ResizeToFit(height=164)],
                                         format='JPEG', options={'quality': 75})

        class Video(Media):
            url = models.URLField('url', max_length=256, default='')
            embed_url = models.URLField('embed url', max_length=256, default='', blank=True)
            author = models.CharField('author', max_length=64, default='', blank=True)
            thumbnail = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75}, null=True, default=None, blank=True)
            thumbnail_1 = ImageSpecField(source='thumbnail', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75})
            thumbnail_2 = ImageSpecField(source='thumbnail',
                                         #processors=[SmartResize(width=256, height=192)],
                                         processors=[ResizeToFit(height=164)],
                                         format='JPEG', options={'quality': 75})

        class Crag(models.Model, WebModel):
            name = models.CharField('name', max_length=64, default='', db_index=True)
            normalized_name = models.CharField('normalized name', max_length=64, default='', editable=False)
            type = models.IntegerField('crag type', null=True, default=None, choices=crag_types)
            description = models.TextField('description', default='', blank=True)
            country = models.ForeignKey('country', null=True, default=None)  # TODO: make this not null when db enables it
            latitude = models.FloatField('latitude', null=True, default=None)
            longitude = models.FloatField('longitude', null=True, default=None)
            location_index = FixedCharField('location index', length=24, default='', editable=False, db_index=True)  # handled by db, used for marker clustering
            added_by = models.ForeignKey('member', null=True, default=None)
            #route_count = models.IntegerField('route count', null=True, default=None, editable=False)
            date_created = models.DateTimeField('date created', auto_now_add=True, null=True, default=None, editable=False)
            last_modified = models.DateTimeField('last modified', auto_now=True, null=True, default=None, editable=False)
            star_image = models.OneToOneField('Image', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
            star_video = models.OneToOneField('Video', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, in the workplace I've worked at two companies: one, a very well known, public-facing, high-traffic website, and another a high-end Financial Services web-based hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions. OLD LIKE YOU WOULDN'T BELIEVE. Which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology? At least enough value to overrule a rewrite, right? Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks" they would beg to differ, most likely give me this look like I don't know anything, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, who does not agree with any benefits from having source control integrated, and yet uses the infamous Visual Source Safe. I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So, why are such smarties not running away from Visual Source Safe? Surely if it was so bad, it would've been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS. UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please, no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS. UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual Source Safe, it might be time to migrate over to TFS. I was able to convince them, and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years' worth of everything. And now we're on TFS. Much more integrated. Much cooler. So all in all, VSS served its purpose for years without a hiccup. There were no horror stories, and Visual Source Safe as source control worked just fine. So, to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.

    Read the article

  • WCF Routing Service Filter Generator

    - by Michael Stephenson
    Recently I've been working with the WCF routing service, and in our case we were simply routing based on the SOAP Action. This is a pretty good approach for a standard redirection of the message, when all messages matching a SOAP Action will go to the same endpoint. Using the SOAP Action also lets you be specific about which methods you expose via the router. One of the things which was a pain was the number of routing rules I needed to create, because we were routing for a lot of different methods. I could have explored the option of using a regular expression to match the message to its routing, but I wanted to be very specific about what's routed and not risk exposing methods I shouldn't via the router. I decided to put together a little spreadsheet so that I can generate part of the configuration I need to put in the configuration file, rather than have to type it by hand. To see how this works, download the spreadsheet from the following URL: https://s3.amazonaws.com/CSCBlogSamples/WCF+Routing+Generator.xlsx In the spreadsheet you will see that the squares in green are the ones which you need to amend. In the picture below you can see that you specify a prefix and a suffix for the filter name, the core namespace from the web service you're generating routing rules for, and the WCF endpoint name which you want to route to. In column A you will see the green cells where you add the list of method names which you want to include routing rules for. The spreadsheet will work out what the full SOAP Action would be, and the name you will use for that filter in your WCF routing filters. In column D the spreadsheet will have generated the XML snippet which you can add to the routing filters section in your configuration file. In column E the spreadsheet will have created the XML snippet which you can add to the routing table to send messages matching each filter to the appropriate WCF client endpoint, to forward the message to the required destination. Hopefully you can see that with this spreadsheet it would be very easy to produce accurate XML for the WCF routing configuration if you had a large number of routing rules. If you have additional methods in other services, you can simply copy the worksheet and add multiple copies to the Excel workbook. One worksheet per service works well.
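    If you would rather script it than use Excel, the same generation is a few lines of code. Below is a rough sketch of the idea in Python; the namespace, method names and endpoint name are placeholders to substitute with your own, and the emitted elements follow the standard WCF routing configuration shape (a <filter> per method for the filters section, and an <add> per method for the filter table):

        # Sketch: emit WCF routing <filter> and filter-table <add> entries for a
        # list of service methods. All names below are example placeholders.
        namespace = "http://mycompany.com/services/IOrderService"  # core namespace
        endpoint = "OrderServiceClient"                            # WCF client endpoint
        methods = ["SubmitOrder", "CancelOrder", "GetOrderStatus"]

        for m in methods:
            action = "%s/%s" % (namespace, m)
            filter_name = "Filter_%s" % m
            print('<filter name="%s" filterType="Action" filterData="%s"/>' % (filter_name, action))
            print('<add filterName="%s" endpointName="%s"/>' % (filter_name, endpoint))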

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program, and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    If my render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    and assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs:

    1: Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.

    2: Use a typed output array variable for the expected type. In this case, I think it would be something like this:

        out vec3 output[3];

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is then the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
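    For reference, the glDrawBuffers() call mentioned above is what maps the fragment shader's output locations to the FBO attachments. A minimal sketch (PyOpenGL syntax for brevity; it assumes the FBO and its three textures have already been created, attached, and bound):

        # Map fragment outputs 0..2 to the bound FBO's color attachments.
        from OpenGL.GL import (glDrawBuffers, GL_COLOR_ATTACHMENT0,
                               GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2)

        attachments = [GL_COLOR_ATTACHMENT0,  # layout(location = 0) -> WorldPos
                       GL_COLOR_ATTACHMENT1,  # layout(location = 1) -> Diffuse
                       GL_COLOR_ATTACHMENT2]  # layout(location = 2) -> Normal
        glDrawBuffers(3, attachments)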

    Read the article

  • OSB unit testing, part 1 by Qualogy

    - by JuergenKress
    In my current project, I inherited a lot of OSB components that have been developed by (former) team members, but they all lack unit tests. This is a situation I really dislike, since it makes it much harder to refactor or bug-fix the existing code base. So, for all newly created components (and components I have to bug-fix) I strive to add unit tests. Of course, the unit tests will be created using my favourite testing tool: soapUI!

    Unit of test
    The unit test should be created for the service composition, which in OSB terms is the proxy service in combination with its business service. Now, since you do not want to rely on any other services, you should provide mock services for all services invoked from your component under test. In a previous article, I wrote about mocking your services in soapUI. While this approach would also be valid here, creating a mock service (and certainly deploying it on a separate web server) would violate one of the core principles of unit testing: to make your unit tests as self-contained as possible, i.e. not depending on any external components. In this article, I will show you how to achieve this by simply providing a mock response inside your unit test.

    Scenario
    The scenario I implement for testing is a simple currency converter: the external request consists of a from and a to currency, and an amount (in currency from). The service will perform an exchange rate lookup using the WebServiceX CurrencyConverter and return a response to the caller consisting of both the source and target currencies and amounts. For the purpose of unit testing, I will implement a mock response for the exchange rate lookup. Read the complete article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Qualogy, OSB, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • The difference

    So with the CTP tools available, we've been building a few apps, just to get a feel for the tools and what's supported in the framework. What's been great is that everything is fairly familiar and consistent, largely thanks to the .NET Framework and Microsoft's focus on providing good tools. We've produced mobile applications, mostly in concept form, for Windows Phone Classic, iPhone and Android, but never so quickly and never of such high quality and visual impression. I attribute some of this, obviously, to our familiarity with the Microsoft platform and tools. Though when you look at the designs our team has produced, it becomes clear that this is not just another mobile application container. The Metro design language calls for content prominence, with fluid motion and transitions, a crisp font, and easily organized placement of features and services, in addition to a purposeful right-edge tease, where the intent is for users to discover new premium content and services. The concept that enables this is called hubs; building applications with hubs changes your thinking from a single mobile application task to thinking creatively about a mobile experience. It's engaging to think of the other brands and industry verticals that will take advantage of this core feature. Combine this with Windows Phone 7 live tiles, more on that later, and you have a recipe for a solid mobile services platform. This is so much more fun and liberating than my icon on a grid.

    Read the article

  • How do I keep user input and rendering independent of the implementation environment?

    - by alex
    I'm writing a Tetris clone in JavaScript. I have a fair amount of experience in programming in general, but am rather new to game development. I want to separate the core game code from the code that would tie it to one environment, such as the browser. My quick thoughts led me to having the rendering and input functions external to my main game object. I could pass the current game state to the rendering method, which could render using canvas, elements, text, etc. I could also map input to certain game input events, such as move piece left, rotate piece clockwise, etc. I am having trouble designing how this should be implemented in my object. Should I pass references to functions that the main object will use to render and process user input? For example...

        var TetrisClone = function(renderer, inputUpdate) {
            this.renderer = renderer || function() {};
            this.inputUpdate = inputUpdate || function() {};
            this.state = {};
        };

        TetrisClone.prototype = {
            update: function() {
                // Get user input via function passed to constructor.
                var inputEvents = this.inputUpdate();
                // Update game state.
                // Render the current game state via function passed to constructor.
                this.renderer(this.state);
            }
        };

        var renderer = function(state) {
            // Render blocks to browser page.
        };

        var inputEvents = {};
        var charCodesToEvents = { 37: "move-piece-left" /* ... */ };

        document.addEventListener("keypress", function(event) {
            inputEvents[event.which] = true;
        });

        var inputUpdate = function() {
            var translatedEvents = [], event, translatedEvent;
            for (event in inputEvents) {
                if (inputEvents.hasOwnProperty(event)) {
                    translatedEvent = charCodesToEvents[event];
                    translatedEvents.push(translatedEvent);
                }
            }
            inputEvents = {};
            return translatedEvents;
        };

        var game = new TetrisClone(renderer, inputUpdate);

    Is this a good game design? How would you modify this to suit best practice in regard to making a game as platform/input independent as possible?

    Read the article

  • MOSSt 2010 Hosting :: Dialog Platform in SharePoint 2010 & How to Open the Edit Form Dialog for List Item

    - by mbridge
    One of the new user interface platforms in SharePoint 2010 is the Dialog Platform. A dialog is essentially a <div> which becomes visible on demand and renders the HTML using a background overlay, creating a modal-dialog-like user experience. We can show an existing div from within the page, or a different page via a URL, inside the dialog. When we pass a URL to the dialog, it looks for the querystring parameter "IsDlg=1". If this parameter exists, it dynamically loads the "/_layouts/styles/dlgframe.css" file. This file overrides the "s4-notdlg" class items with "display:none", which means that all items with this class are not displayed in dialog mode. So if we go to the v4.master page, we can see that this class is used by the Ribbon control to hide the ribbon when in dialog mode.

    How to open the Edit Form dialog for a list item:
    In SharePoint 2010 the URL for opening the edit form of any list item looks something like this:

        http://intranet.contoso.com/<SiteName>/Lists/<ListName>/EditForm.aspx?ID=1&IsDlg=1

    ID is the list item row identifier, and as discussed above, IsDlg is for the dialog mode. Now, to open a dialog we need to use the SP.UI.ModalDialog.showModalDialog method from the ECMAScript client object model and pass in the URL of the page, the width and height of the dialog, and also a callback function in case we want some code to run after the dialog is closed.

        <script type="text/javascript">
            // Handle the dialog callback.
            function DialogCallback(dialogResult, returnValue) {
            }

            // Open the dialog.
            function OpenEditDialog(id) {
                var options = {
                    url: "http://intranet.contoso.com/<SiteName>/Lists/<ListName>/EditForm.aspx?ID=" + id + "&IsDlg=1",
                    width: 700,
                    height: 700,
                    dialogReturnValueCallback: DialogCallback
                };
                SP.UI.ModalDialog.showModalDialog(options);
            }
        </script>

    The .js files for the ECMAScript object model (SP.js, SP.Core.js, SP.Ribbon.js, and SP.Runtime.js) are installed in the %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\LAYOUTS directory. Here is a good MSDN link explaining the Client Object Model Distribution and Deployment options available in SharePoint 2010.

    Read the article

  • Insanity – Day 1

    - by D'Arcy Lussier
    Some people do those posts about "Here's what I'm going to do to change my life". I don't really like those. I'm a "don't tell me what you're going to do, tell me what you're doing/have done" type of guy. So while I could say how I'm going to change my life and be healthier and happier and thinner in 60 days, I'm just going to tell you how I'm progressing through the Insanity workout. Insanity is a workout-in-a-box. It's a collection of DVDs that you work out to. Nothing new here, except that the program is intense. The core tenet of the program is intensity: you work out at an elevated heart rate for 3 minutes with a short break in between, as opposed to traditional interval training where you go hard for a short time and recover for a few minutes. The other aspect of it is commitment. There is a timetable to follow: you work out 6 days a week with one rest day. This isn't meant to be a "pick it up whenever you feel like it" type of program. The videos themselves are kind of… I don't want to say low quality, but not as polished as I expected. Maybe that's what they were going for. By that I mean they show shots of cameramen and the production equipment during the workout. Otherwise, it's the Insanity leader Shawn T. leading a group of pretty fit folks through this gruelling workout. And ultimately, those little production nit-picks are irrelevant compared to the actual workout. Holy crap. I haven't done an aerobics class in like… ever. And watching the video before my actual workout, I thought "That doesn't look too hard". Believe me, it is. No weights, no machines, just various exercises done in circuits and with increasing speed. By the end of the workout, I was drenched. So that was day one. Some stats just so I can track it: I'm 286 lbs as of my last weigh-in a week or so ago. Should be interesting to see what 60 days of this does! D

    Read the article

  • Building Enterprise Smartphone App – Part 1: Why Build Smart Phone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Intro
    Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging that can be used in the future.

    Why Build Enterprise Smartphone Applications
    So what are some of the ways that you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage. Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients in their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty. A salesperson may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer. You may also need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts. Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs, because the same device is used for communicating with the driver and for other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases. Executives are always on the go. They spend most of their time in meetings, and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports. I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks. These are all valid reasons to build smartphone applications. In the next installment I will discuss platforms and features. del.icio.us Tags: Smartphones, Enterprise Smartphone Apps, Architecture

    Read the article

  • New laptop, Windows 8.1, attempting dual install. Ubuntu installer doesn't 'see' existing OS

    - by Flaminica
    Though I've used Ubuntu for a few years, I'm new to installation. Previously I had help; now I'm doing it alone (moved across the world). Windows 8.1 came preinstalled on my new laptop (Toshiba Satellite C70-A-17C - Core i5, 8 GB RAM, 750 GB HDD). I have already followed a few steps I found online to prepare for a dual install (with Ubuntu 14.04). I backed up Windows, created a bootable Ubuntu USB and DVD (just in case one didn't work), turned off fast boot and secure boot, and shrank C:. The new unallocated drive portion is 292.97 GB. After shrinking C:, I restarted Windows a couple of times to make sure everything was working fine (it is). I then attempted to install with the Ubuntu live USB. However, the Ubuntu installer doesn't see that Windows 8.1 is already installed. I don't understand this, and I don't want to mess with Ubuntu partitioning when I don't know where the partitions will be created. My concern is that, if I go further with the installation process, Windows might be overwritten or compromised in some way. I then tried to reboot using the Ubuntu live DVD, thinking I might get a different result. However, I can't figure out how to make the laptop boot from the CD drive. I went into the BIOS and found no option there, either. Any help is very appreciated! EDIT: Looks like I can't link directly to each photo. Here is my album of screenshots: http://imgur.com/a/zChCo The captions, in order: Here you can see that there's no option to boot from the CD drive, only USB. Everything looks okay so far. I don't understand this. Ubuntu has not yet been installed. Unmounting partitions? (I chose 'no'.) Even though the laptop came pre-installed with Windows 8.1, the Ubuntu USB installer can't see it. I chose 'something else'. I need to pick and format partitions. I scrolled down and took a second shot to include all information. Completely lost and cancelled installation.

    Read the article

  • Need Help with partitions and such Dual Booting 13.10 w/ windows 7

    - by Aymax
    So I've spent the past couple of days freaking out about this and looking for answers, and I decided to resort to asking on forums. I probably should have done that before I wasted 2 entire days. I am trying to dual-install Ubuntu 13.10 on my x64 Windows 7 Home Premium computer. I have 6 GB of RAM, a 1 TB hard drive, and a 3.3 GHz dual-core processor (just in case it matters). I've managed to figure some things out. I've burned the Ubuntu files onto a DVD, and I have been able to successfully run it off the disk. I also shrank my Windows partition by 120 GB and partitioned that for Ubuntu (all using the Windows Disk Manager). Problems: 1. When I turn my computer on with the DVD in the tray, the computer can't find Windows. It flashes a screen real quick that says something about not being able to find an operating system, and then goes to GRUB and asks what I want to do with Ubuntu. This scares me, because I don't know if it means that I will not be able to boot Windows if I install Ubuntu. 2. The Ubuntu 13.10 installer does not detect my Windows operating system. I only have the options to erase everything on my drive, or "something else." I choose that, which brings me to 3. 3. I don't understand the partition table. I have no idea which drive I'm selecting to install stuff on, much less which one to select. I tried to tell by the amount of space partitioned off, but none of the numbers seem to be accurate. Plus, all the names are dev/sda(#). I know Ubuntu knows the name of my partition, because in the sidebar it shows the names of the different drives, including the partition I made; so why don't they use the names? I have no idea what I'm going to be erasing. I've read that I should be able to tell which is which by the file system type, but they are all NTFS, including the one I made. My only other option was FAT; none for EXT2 or any of that, like people said there would be. My main concerns are accidentally erasing Windows or not being able to access Windows. Any feasible solution is helpful, whether it helps me with the install or makes Ubuntu see Windows. I realize this question has been asked a lot, but I have found no feasible answers so far. I am relatively new to this and have never installed an operating system before, so I do not know most of the jargon. Please keep it relatively simple; I am not a programmer. Thanks.

    Read the article

  • Help with DB Structure, vOD site

    - by Chud37
    I have a video-on-demand style site that hosts series of videos under different modules. However, with the way I have designed the database, it is proving to be very slow. I have asked this question before and someone suggested indexing, but I cannot seem to get my head around it. So I would like someone to help with the structure of the database here, to see if it can be improved. The core table is Videos:

        ID       bigint(20)   (primary key, auto-increment)
        pID      text
        airdate  text
        title    text
        subject  mediumtext
        url      mediumtext
        mID      int(11)
        vID      int(11)
        sID      int(11)

    pID is a unique 5-digit string that is a shorthand identifier for each video. airdate is the timestamp (stored in text format; right there, maybe I should change that to an auto-updating TIMESTAMP). title is self-explanatory, subject is self-explanatory, url is the hard link on the site to the video, mID is joined to another table for the module title, vID is joined to another table for the language of the video (English, Russian, etc.), and sID is the summary for the module, a paragraph stored in an external database. The slowest part of the website is the logging part of it. I store the data in another table called Hits:

        id      mediumint(10)  (primary key, auto-increment)
        progID  text
        ts      int(10)

    Again (this was all made a while ago), my timestamp (ts) is an INT instead of ON UPDATE CURRENT_TIMESTAMP, which I guess it should be. However, this table is now 47,492 rows long, and the script that I wrote to process it is very, very slow; so slow in fact that it times out. A row is added to this table each time a user clicks 'Play' on the website, so progID is the same as the pID, and the php time() timestamp is logged in ts. Basically, I load the entire 'Hits' table into an array and count the hits on each day using the ts column. I am guessing (I'm quite slow at all this, but I had no idea this would happen when I built the thing) that this is possibly the worst way to go about it. So my questions are as follows: Is there a better way of structuring the 'Videos' table? If so, what do you suggest? Is there a better way of structuring 'Hits'? If so, please help/tell me! Or is it the fact that my tables are fine and the PHP coding is crappy?
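    To illustrate the counting problem: the per-day totals currently computed in PHP could be produced by a single aggregate query instead of loading all 47,000+ rows into an array. A rough sketch (MySQL syntax assumed; shown from Python for brevity, with placeholder connection details, and the same SQL works from PHP):

        # Sketch: let the database count hits per day instead of loading all rows.
        # Assumes MySQL and the Hits table above; connection details are placeholders.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="vod")
        cur = conn.cursor()
        cur.execute("""
            SELECT DATE(FROM_UNIXTIME(ts)) AS day, COUNT(*) AS hits
            FROM Hits
            GROUP BY day
            ORDER BY day
        """)
        for day, hits in cur.fetchall():
            print(day, hits)

    With an index on ts, a query like this stays fast even as the table grows, because the counting happens inside the database rather than in script memory.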

    Read the article

  • Is Ubuntu recognizing and/or using my NVIDIA graphics card?

    - by user212860
    This is my first post here, and I'm pretty new to Ubuntu/Linux. I currently have no OS other than Ubuntu 13.10 (I used to have Win7, until I got a new terabyte hard drive). My current PC build, if any of this helps:

        CPU: Intel i5 quad-core
        Graphics: NVIDIA GeForce GTX 650
        RAM: 8 GB
        HDD: 1 TB SATA 3
        Motherboard: MSi Z77 A-G41
        OS: Ubuntu 13.10

    So I recently installed Ubuntu 13.10 and put Steam on it, and I'm seeing that my games run a lot slower than they did on Win7. I figured it was a graphics problem, so I checked System Settings > Details > Overview. It says under "Graphics" that I have "Gallium 0.4 on NVE7" (I don't really know what that is). Does this mean that Ubuntu is not using my graphics card? In System Settings > Software & Updates > Additional Drivers, it clearly shows this:

        NVIDIA Corporation: GK107 [GeForce GTX 650] - This device is using an alternative driver

    (and then a list of drivers that I can switch between). So this is a bit confusing: Software & Updates clearly shows that my NVIDIA card is installed and that a driver is selected for it, but System Settings shows some Gallium 0.4 thing. I did a bit of research and ended up typing the command "lspci | grep VGA" in the Terminal. It showed this in response:

        VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650] (rev a1)

    The Terminal seems to recognize my graphics card. What it looks like to me is that I don't have the proper driver, and I might be using my CPU's integrated graphics. When I switch between the drivers in that list, System Settings still does not see my card, and some of the drivers give me an OpenGL error when I try to run a game. It might just be that my games run slowly because the developers have not optimized them well for Ubuntu; however, that still doesn't change the fact that System Settings is not showing my NVIDIA card.

    TL;DR version: How do I know if my video card is being recognized/used? If it is not being used, what is the best way to fix that? Please make your answers easy to understand; I do not mind wordy responses, as long as I can follow what you're saying. Any help would be greatly appreciated! Thanks, Jabber5
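    A hedged way to answer the "is my card being used" question above from the terminal; "Gallium 0.4 on NVE7" normally means the open-source nouveau driver is rendering, not NVIDIA's proprietary one. The driver version below is an example only; pick whichever nvidia-* package the Additional Drivers list actually offers.

        # Show which kernel driver is bound to the card
        # ("nouveau" = open-source default, "nvidia" = proprietary).
        lspci -k | grep -A 3 VGA

        # Show the OpenGL renderer in use (needs the mesa-utils package).
        glxinfo | grep "OpenGL renderer"

        # Example of switching to the proprietary driver on 13.10:
        sudo apt-get install nvidia-319
        sudo reboot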

    Read the article

  • Best way to load application settings

    - by enzom83
    A simple way to keep the settings of a Java application is a text file with the ".properties" extension, containing the identifier of each setting associated with a specific value (this value may be a number, string, date, etc.). C# uses a similar approach, but the text file must be named "App.config". In both cases, the source code must initialize a specific class for reading settings: this class has a method that returns the value (as a string) associated with the specified setting identifier.

        // Java example
        Properties config = new Properties();
        config.load(...);
        String valueStr = config.getProperty("listening-port");
        // ...

        // C# example
        NameValueCollection setting = ConfigurationManager.AppSettings;
        string valueStr = setting["listening-port"];
        // ...

    In both cases we should parse the strings loaded from the configuration file and assign the converted values to the related typed objects (parsing errors could occur during this phase). After the parsing step, we must check that the setting values belong to a specific domain of validity: for example, the maximum size of a queue should be a positive value, some values may be related (example: min < max), and so on.

    Suppose that the application should load the settings as soon as it starts; in other words, the first operation performed by the application is to load the settings. Any invalid values must be replaced automatically with default values; if this happens to a group of related settings, those settings are all set to their default values.

    The easiest way to perform these operations is to create a method that first parses all the settings, then checks the loaded values, and finally sets any default values. However, maintenance is difficult with this approach: as the number of settings grows during development, it becomes increasingly difficult to update the code. In order to solve this problem, I thought of using the Template Method pattern, as follows.

        public abstract class Setting
        {
            protected abstract bool TryParseValues();
            protected abstract bool CheckValues();
            public abstract void SetDefaultValues();

            /// <summary>
            /// Template Method
            /// </summary>
            public bool TrySetValuesOrDefault()
            {
                if (!TryParseValues() || !CheckValues())
                {
                    // parsing error or domain error
                    SetDefaultValues();
                    return false;
                }
                return true;
            }
        }

        public class RangeSetting : Setting
        {
            private string minStr, maxStr;
            private byte min, max;

            public RangeSetting(string minStr, string maxStr)
            {
                this.minStr = minStr;
                this.maxStr = maxStr;
            }

            protected override bool TryParseValues()
            {
                return (byte.TryParse(minStr, out min) && byte.TryParse(maxStr, out max));
            }

            protected override bool CheckValues()
            {
                return (0 < min && min < max);
            }

            public override void SetDefaultValues()
            {
                min = 5;
                max = 10;
            }
        }

    The problem is that this way we need to create a new class for each setting, even for a single value. Are there other solutions to this kind of problem? In summary:

    1. Easy maintenance: for example, the addition of one or more parameters.
    2. Extensibility: a first version of the application could read a single configuration file, but later versions may offer a multi-user setup (an admin sets up the basic configuration, users can change only certain settings, etc.).
    3. Object-oriented design.
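    A minimal C# sketch of one alternative to the subclass-per-setting problem raised above: keep the same parse/check/default flow, but compose it from delegates on a single generic class, so a simple setting becomes one line of registration. Everything here is an assumption for illustration, not code from the post; related settings such as min/max would still need a small composite type.

        using System;

        public class Setting<T>
        {
            public delegate bool TryParser(string raw, out T value);

            private readonly TryParser parse;
            private readonly Func<T, bool> check;
            private readonly T fallback;

            public T Value { get; private set; }

            public Setting(TryParser parse, Func<T, bool> check, T fallback)
            {
                this.parse = parse;
                this.check = check;
                this.fallback = fallback;
            }

            // Same parse -> check -> default flow as the Template Method version,
            // composed from delegates instead of one subclass per setting.
            public bool TrySetValueOrDefault(string raw)
            {
                T value;
                if (parse(raw, out value) && check(value))
                {
                    Value = value;
                    return true;
                }
                Value = fallback;
                return false;
            }
        }

        // Usage: one line per simple setting instead of one class.
        // var port = new Setting<int>(int.TryParse, p => p > 0 && p <= 65535, 8080);
        // port.TrySetValueOrDefault(setting["listening-port"]);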

    Read the article

  • Entity Framework - Condition on one to many join (Lambda)

    - by nirpi
    Hi, I have 2 entities: Customer and Account, where a customer can have multiple accounts. On the account, I have a "PlatformTypeId" field, which I need to condition on (multiple values), among other criterions. I'm using lambda expressions to build the query. Here's a snippet:

        var customerQuery = (from c in context.CustomerSet.Include("Accounts")
                             select c);

        if (criterions.UserTypes != null && criterions.UserTypes.Count() > 0)
        {
            List<short> searchCriterionsUserTypes = criterions.UserTypes.Select(i => (short)i).ToList();
            customerQuery = customerQuery.Where(
                CommonDataObjects.LinqTools.BuildContainsExpression<Customer, short>(
                    c => c.UserTypeId, searchCriterionsUserTypes));
        }

        // Other criterions, including the problematic platforms condition (below)

        var customers = customerQuery.ToList();

    I can't figure out how to build the accounts' platforms condition:

        if (criterions.Platforms != null && criterions.Platforms.Count() > 0)
        {
            List<short> searchCriterionsPlatforms = criterions.Platforms.Select(i => (short)i).ToList();
            customerQuery = customerQuery.Where(
                c => c.Accounts.Where(
                    LinqTools.BuildContainsExpression<Account, short>(
                        a => a.PlatformTypeId, searchCriterionsPlatforms)));
        }

    (BuildContainsExpression is a method we use to build the expression for the multi-select.) I'm getting a compilation error:

        The type arguments for method 'System.Linq.Enumerable.Where(System.Collections.Generic.IEnumerable, System.Func)' cannot be inferred from the usage. Try specifying the type arguments explicitly.

    Any idea how to fix this? Thanks, Nir.
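    A hedged guess at the fix for the error above: the inner Where is handed an expression tree where a bool-returning lambda is expected, so type inference fails. If the project is on EF4 or later, List<T>.Contains translates to SQL directly and Any() supplies the bool the outer Where needs; on EF1 this translation is not available and the expression-building detour is still required.

        if (criterions.Platforms != null && criterions.Platforms.Count() > 0)
        {
            List<short> searchCriterionsPlatforms =
                criterions.Platforms.Select(i => (short)i).ToList();

            // Any() returns a bool per customer; EF4+ turns Contains
            // into a SQL IN clause over the joined Accounts rows.
            customerQuery = customerQuery.Where(
                c => c.Accounts.Any(a => searchCriterionsPlatforms.Contains(a.PlatformTypeId)));
        }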

    Read the article

  • uploadify scriptData problem

    - by elpaso66
    Hi, I'm having problems with scriptData on Uploadify. I'm pretty sure the config syntax is fine, but whatever I do, scriptData is not passed to the upload script. I tested in both FF and Chrome with Flash version Shockwave Flash 9.0 r31. This is the config:

        $(document).ready(function() {
            $('#id_file').uploadify({
                'uploader'       : '/media/filebrowser/uploadify/uploadify.swf',
                'script'         : '/admin/filebrowser/upload_file/',
                'scriptData'     : {'session_key': 'e1b552afde044bdd188ad51af40cfa8e'},
                'checkScript'    : '/admin/filebrowser/check_file/',
                'cancelImg'      : '/media/filebrowser/uploadify/cancel.png',
                'auto'           : false,
                'folder'         : '',
                'multi'          : true,
                'fileDesc'       : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'fileExt'        : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'sizeLimit'      : 10485760,
                'scriptAccess'   : 'sameDomain',
                'queueSizeLimit' : 50,
                'simUploadLimit' : 1,
                'width'          : 300,
                'height'         : 30,
                'hideButton'     : false,
                'wmode'          : 'transparent',
                translations     : {
                    browseButton: 'BROWSE',
                    error: 'An Error occured',
                    completed: 'Completed',
                    replaceFile: 'Do you want to replace the file',
                    unitKb: 'KB',
                    unitMb: 'MB'
                }
            });

            $('input:submit').click(function(){
                $('#id_file').uploadifyUpload();
                return false;
            });
        });

    I checked that other values (the file name) are passed correctly, but session_key is not. This is the decorator code from django-filebrowser; you can see it checks request.POST.get('session_key'). The problem is that request.POST is empty.

        def flash_login_required(function):
            """
            Decorator to recognize a user by its session.
            Used for Flash-Uploading.
            """
            def decorator(request, *args, **kwargs):
                try:
                    engine = __import__(settings.SESSION_ENGINE, {}, {}, [''])
                except:
                    import django.contrib.sessions.backends.db
                    engine = django.contrib.sessions.backends.db
                print request.POST
                session_data = engine.SessionStore(request.POST.get('session_key'))
                user_id = session_data['_auth_user_id']
                # will return 404 if the session ID does not resolve to a valid user
                request.user = get_object_or_404(User, pk=user_id)
                return function(request, *args, **kwargs)
            return decorator
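    Two hedged things worth trying for the missing scriptData above, since neither is confirmed from the post alone: force the request method explicitly (if the SWF falls back to GET, the pairs never reach request.POST), or move the key onto the script querystring and read request.GET in the decorator instead.

        $('#id_file').uploadify({
            // ...existing options from the config above...
            'method'     : 'post',
            'scriptData' : {'session_key': 'e1b552afde044bdd188ad51af40cfa8e'}
        });

        // Fallback: pass the key on the URL and switch the decorator
        // to request.GET.get('session_key'):
        // 'script': '/admin/filebrowser/upload_file/?session_key=e1b552afde044bdd188ad51af40cfa8e'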

    Read the article

  • How should I distribute a pre-built perl module, and what version of perl do I build for?

    - by Mike Ellery
    This is probably a multi-part question. Background: we have a native (C++) library that is part of our application, and we have managed to use SWIG to generate a Perl wrapper for this library. We'd now like to distribute this Perl module as part of our application. My first question: how should I distribute this module? Is there a standard way to package pre-built Perl modules? I know there is PPM for the ActiveState distro, but I also need to distribute this for Linux systems. I'm not even sure which files are required, but I'm guessing it's the .pm and .so files, at a minimum. My next question: it looks like I might need to build my module project for each version of Perl that I want to support. How do I know which Perl versions I should build for? Are there any standard guidelines... or better yet, a way to build a package that will work with multiple versions of Perl? Sorry if my questions make no sense; I'm fairly new to the compiled-module aspects of Perl. CLARIFICATION: the underlying compiled source is proprietary (closed source), so I can't just ship source code and the appropriate make artifacts for the package. Wish I could, but it's not going to happen in this case. Thus, I need a sane scheme for packaging prebuilt binary files for my module.
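    A hedged sketch of what a prebuilt-binary bundle often looks like, given that XS/SWIG .so files are only compatible with the Perl version and archname they were built against (threaded vs. unthreaded builds differ too). The directory names here are illustrative, not a standard:

        # Discover the targets you need to build for:
        perl -V:version -V:archname

        # One bundle per (version, archname) pair, e.g.:
        MyApp-Native-5.10-x86_64-linux/
            lib/MyApp/Native.pm               # the SWIG-generated wrapper
            arch/auto/MyApp/Native/Native.so  # the compiled glue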

    Read the article

  • Why isn't uploadify and asp.net mvc 2 playing nice for me?

    - by Paperino
    First of all, I've checked out all the SO threads and googled my brains out. I must be missing something obvious. I'd really appreciate some help! This is what I've got.

    UploadController.cs:

        using System.Web;
        using System.Web.Mvc;

        namespace NIMDocs.Controllers
        {
            public class UploadController : Controller
            {
                public ActionResult Index()
                {
                    return View();
                }

                public string Procssed(HttpPostedFileBase FileData)
                {
                    // DO STUFF
                    return "DUHR I AM SMART";
                }
            }
        }

    Index view for Upload:

        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head>
            <title>ITS A TITLE</title>
            <script src="../../Content/jqueryPlugins/uploadify/jquery-1.3.2.min.js" type="text/javascript"></script>
            <script src="../../Content/jqueryPlugins/uploadify/swfobject.js" type="text/javascript"></script>
            <script src="../../Content/jqueryPlugins/uploadify/jquery.uploadify.v2.1.0.min.js" type="text/javascript"></script>
            <script type="text/javascript">
                $(document).ready(function () {
                    $('#uploadify').fileUpload({
                        'uploader': '../../Content/jqueryPlugins/uploadify/uploadify.swf',
                        'script': '/Upload/Processed',
                        'folder': '/uploads',
                        'multi': 'true',
                        'buttonText': 'Browse',
                        'displayData': 'speed',
                        'simUploadLimit': 2,
                        'cancelImg': '/Content/Images/cancel.png'
                    });
                });
            </script>
        </head>
        <body>
            <input type="file" name="uploadify" id="uploadify" />
            <p><a href="javascript:jQuery('#uploadify').uploadifyClearQueue()">Cancel All Uploads</a></p>
        </body>
        </html>

    What am I missing here? I've tried just about every path permutation for uploadify's "uploader" option: absolute path, '/' prefixed, etc.
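    One thing that jumps out above, offered as a hedged guess since there may be more going on: the controller action is spelled Procssed, but the 'script' option posts to /Upload/Processed, so MVC routes the upload to a nonexistent action. A corrected action might look like this; the [HttpPost] attribute is an added assumption beyond the spelling fix.

        [HttpPost]
        public string Processed(HttpPostedFileBase FileData) // name now matches 'script'
        {
            // DO STUFF
            return "DUHR I AM SMART";
        }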

    Read the article
