Search Results

Search found 17016 results on 681 pages for 'ruby debug'.


  • Intel Centrino Wireless N 1000 doesn't work on a Lenovo Z560

    - by Timetraveler
    I upgraded my Ubuntu 11.04 to 11.10 and my WiFi has stopped working. I have a Lenovo Z560 with an Intel Centrino Wireless-N 1000. I have searched various threads describing similar problems but have had no success. wlan0 is not even showing up in rfkill. Please help me find a solution. I am giving below the output of various debug commands. Thanks in advance.

    DISTRIB_ID=Ubuntu DISTRIB_RELEASE=11.10 DISTRIB_CODENAME=oneiric DISTRIB_DESCRIPTION="Ubuntu 11.10"
    ----##uname -a Linux gurucharapathy-laptop 3.0.0-17-generic-pae #30-Ubuntu SMP Thu Mar 8 17:53:35 UTC 2012 i686 i686 i386 GNU/Linux
    ----##lspci -nnk | grep -iA2 net 05:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1000 [8086:0084] Subsystem: Intel Corporation Centrino Wireless-N 1000 BGN [8086:1315] Kernel modules: iwlagn 06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02) Subsystem: Lenovo Device [17aa:392e] Kernel driver in use: r8169
    ----##iwconfig lo no wireless extensions. eth0 no wireless extensions.
    ----##iwlist scan lo Interface doesn't support scanning. eth0 Interface doesn't support scanning.
    ----##rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no 2: ideapad_bluetooth: Bluetooth Soft blocked: no Hard blocked: no
    ----##lsmod Module Size Used by rfcomm 38408 8 bnep 17923 2 parport_pc 32114 0 ppdev 12849 0 binfmt_misc 17292 1 snd_hda_codec_hdmi 31426 1 snd_hda_codec_conexant 52460 1 uvcvideo 67271 0 videodev 85626 1 uvcvideo snd_hda_intel 28358 2 snd_hda_codec 91859 3 snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec joydev 17393 0 snd_pcm 80435 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 i915 509554 9 drm_kms_helper 32889 1 i915 snd_rawmidi 25241 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event snd_timer 28932 2 snd_pcm,snd_seq drm 196290 5 i915,drm_kms_helper snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq mei 36466 0 mac80211 393421 0 snd 55902 14 snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device ideapad_laptop 13575 0 intel_ips 17753 0 btusb 18160 2 i2c_algo_bit 13199 1 i915 soundcore 12600 1 snd bluetooth 148839 23 rfcomm,bnep,btusb cfg80211 172427 1 mac80211 psmouse 63474 0 serio_raw 12990 0 snd_page_alloc 14108 2 snd_hda_intel,snd_pcm sparse_keymap 13658 1 ideapad_laptop wmi 18744 0 video 18908 1 i915 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp ahci 21634 2 libahci 25761 1 ahci r8169 47200 0
    ----##nm-tool NetworkManager Tool State: asleep Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: r8169 State: unmanaged Default: no HW Address: 88:AE:1D:DE:5F:9C Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on
    ----##lshw -C network *-network UNCLAIMED description: Network controller product: Centrino Wireless-N 1000 vendor: Intel Corporation physical id: 0 bus info: pci@0000:05:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:d6400000-d6401fff *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:06:00.0 logical name: eth0 version: 02 serial: 88:ae:1d:de:5f:9c size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=N/A ip=192.168.0.100 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:2000(size=256) memory:d2410000-d2410fff memory:d2400000-d240ffff memory:d2420000-d243ffff

    Read the article

  • Default values - are they good or evil?

    - by Andrew
    This question is about default values in general - default function return values, default parameter values, default logic for when something is missing, default logic for handling exceptions, default logic for handling edge conditions, etc. For a long time I considered default values to be a "pure evil" thing, something that "cloaks the catastrophe" and results in very hard to find bugs. But recently I started to think about default values as a sort of technical debt... which is not an outright bad thing but something that provides "short term financing" to get us through the project (how many of us could afford to buy a house without taking out a mortgage?). By "short term" I don't mean "do something quickly first and refactor it out later before it hits production". No - I am talking about relying on hardcoded default values in production software. Granted, it could cause some issues, but what if it is only going to cause a single problem in a whole year? Again, I am talking about "average" mainstream software here (not software for a nuclear power station) - the average web site or a UI for an accounting application, meaning that neither people's lives nor millions of dollars are at stake. Again, from my experience, business users would rather live with software which "works somehow" than wait for a perfect one. And the use of default values helps a lot if you develop software in a RAD style. But again - the longest debug sessions I have spent were because of bugs introduced by a default value which either stopped being "a default" along the way, or because a small subsystem was recently upgraded and as a result of this upgrade no longer handles the default correctly (e.g. empty list vs null, or null string vs empty string). So my question is: are default values good or evil? And if they are a technical debt, how do you measure how much you can borrow so you can afford the repayments? Would really appreciate any input. Cheers. EDIT: If I am using default values as a way to cut corners during development - and if the corner cutting results in bugs and issues - what is the methodology to recover from these issues?
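
    To make the "empty list vs null" case concrete, here is a small illustrative C# sketch (the names and the repository call are made up, not from a real project) of the kind of bug a hardcoded default can hide once a subsystem changes underneath it:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class OrderReport
        {
            // Hypothetical lookup. The old subsystem returned an empty list when the
            // customer had no orders; after an upgrade it returns null instead.
            static List<decimal> GetOrderTotals(int customerId)
            {
                return null; // simulate the upgraded behaviour
            }

            static decimal AverageOrderValue(int customerId)
            {
                var totals = GetOrderTotals(customerId);

                // The original code relied on the "empty list means no orders" default
                // and never checked for null, so the upgrade turns a quiet default
                // into a NullReferenceException.
                if (totals.Count == 0)
                    return 0m; // hardcoded default value for "no orders"

                return totals.Average();
            }

            static void Main()
            {
                // Works for years with the old subsystem, crashes after the upgrade.
                Console.WriteLine(AverageOrderValue(42));
            }
        }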

    Read the article

  • Using INotifyPropertyChanged in background threads

    - by digitaldias
    Following up on a previous blog post where I exemplify databinding to objects, a reader was having some trouble with getting the UI to update. Here’s the rough UI: The idea is, when pressing Start, a background worker process starts ticking at the specified interval, then proceeds to increment the databound Elapsed value. The problem is that event propagation is limited to the current thread, meaning that if you fire an event in one thread, other threads of the same application will not catch it. The code behind: So, somewhere in my ViewModel, I have a corresponding method Start that initiates a background worker, for example: public void Start( ) { BackgroundWorker backgroundWorker = new BackgroundWorker( ); backgroundWorker.DoWork += IncrementTimerValue; backgroundWorker.RunWorkerAsync( ); } protected void IncrementTimerValue( object sender, DoWorkEventArgs e ) { do { if( this.ElapsedMs == 100 ) this.ElapsedMs = 0; else this.ElapsedMs++; }while( true ); } Assuming that there is a property: public int ElapsedMs { get { return _elapsedMs; } set { if( _elapsedMs == value ) return; _elapsedMs = value; NotifyThatPropertyChanged( "ElapsedMs" ); } } The above code will not work. If you step into this code in debug, you will find that INotifyPropertyChanged is called, but it does so in a different thread, and thus the UI never catches it, and does not update. One solution: Knowing that the background thread updates the ElapsedMs member gives me a chance to activate the BackgroundWorker class’ progress reporting mechanism to simply alert the main thread that something has happened, and that it is probably a good idea to refresh the ElapsedMs binding. public void Start( ) { BackgroundWorker backgroundWorker = new BackgroundWorker( ); backgroundWorker.DoWork += IncrementTimerValue; // Listen for progress report events backgroundWorker.WorkerReportsProgress = true; // Tell the UI that ElapsedMs needs to update backgroundWorker.ProgressChanged += ( sender, e ) => { NotifyThatPropertyChanged( "ElapsedMs" ); }; backgroundWorker.RunWorkerAsync( ); } protected void IncrementTimerValue( object sender, DoWorkEventArgs e ) { do { if( this.ElapsedMs == 100 ) this.ElapsedMs = 0; else this.ElapsedMs++; // report any progress ( sender as BackgroundWorker ).ReportProgress( 0 ); }while( true ); } What happens above is that I’ve used the BackgroundWorker cross-thread mechanism to alert me when it is OK for the UI to update its ElapsedMs field. Because the property itself is being updated in a different thread, I’m removing the NotifyThatPropertyChanged call from its Set method, and moving that responsibility to the anonymous method that I created in the Start method. This is one way of solving the issue of having a background thread update your UI. I would be happy to hear of other cross-threading mechanisms for working in an MVP/MVC/MVVM pattern.
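
    One other cross-threading mechanism that would fit here: capture the UI thread's SynchronizationContext when the ViewModel is created and post the PropertyChanged notification through it, so the property setter can stay unchanged. This is only a rough, untested sketch of the idea (the class name is made up), not the approach used above:

        using System.ComponentModel;
        using System.Threading;

        public class TimerViewModel : INotifyPropertyChanged
        {
            // Assumes the ViewModel is constructed on the UI thread.
            private readonly SynchronizationContext _uiContext = SynchronizationContext.Current;
            private int _elapsedMs;

            public event PropertyChangedEventHandler PropertyChanged;

            public int ElapsedMs
            {
                get { return _elapsedMs; }
                set
                {
                    if( _elapsedMs == value )
                        return;
                    _elapsedMs = value;
                    NotifyThatPropertyChanged( "ElapsedMs" );
                }
            }

            protected void NotifyThatPropertyChanged( string propertyName )
            {
                var handler = PropertyChanged;
                if( handler == null )
                    return;

                // Marshal the notification back to the thread that created the
                // ViewModel, so the binding engine sees it on the UI thread.
                _uiContext.Post( _ => handler( this, new PropertyChangedEventArgs( propertyName ) ), null );
            }
        }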

    Read the article

  • Upgrade from ASP.NET MVC 1 to MVC 2 - how to and issues with JsonRequestBehavior

    - by Renso
    Goal: Upgrade your MVC 1 app to MVC 2. Issues: You may get errors about your Json data being returned via a GET request violating security principles - we also address this here. This post is not intended to delve into why the Json GET request is or may be an issue, just how to resolve it as part of upgrading from MVC 1 to 2. Solution: First remove all references from your projects to the MVC 1 dll and replace it with the MVC 2 dll. Now update your web.config file in your web app root folder by simply changing references from assembly="System.Web.Mvc, Version=1.0.0.0 to Version=2.0.0.0; there are a couple of references in your config file, here are probably most of them you may have:

        <compilation debug="true" defaultLanguage="c#">
            <assemblies>
                <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            </assemblies>
        </compilation>

        <pages masterPageFile="~/Views/Masters/CRMTemplate.master" pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validateRequest="False">
            <controls>
                <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />

    Secondly, if you return Json objects from an ajax call via the GET method, you have several options to fix this, depending on your situation:
    1. The simplest - as in my case I did this for an internal web app - you may simply do: return Json(myObject, JsonRequestBehavior.AllowGet);
    2. In MVC, if you have a controller base you could wrap the Json method with: public new JsonResult Json(object data) { return Json(data, "application/json", JsonRequestBehavior.AllowGet); }
    3. The most work would be to decorate your Actions with: [AcceptVerbs(HttpVerbs.Get)]
    4. Another option that is also a lot of work, since it needs to be done for every ajax call returning Json, is: msg = $.ajax({ url: $('#ajaxGetSampleUrl').val(), dataType: 'json', type: 'POST', async: false, data: { name: theClass }, success: function(data, result) { if (!result) alert('Failure to retrieve the Sample Data.'); } }).responseText;
    This should cover all the issues you may run into when upgrading. Let me know if you run into any other ones.
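
    For reference, this is roughly what a controller action looks like once option 2 is in place (controller, base class and action names here are just placeholders, not from an actual project) - the base class override supplies AllowGet, so the action body stays the same as it was under MVC 1:

        using System.Web.Mvc;

        // MyControllerBase is assumed to be the "controller base" from option 2, i.e. it
        // derives from Controller and hides Json(object) with the AllowGet version.
        public class ProductController : MyControllerBase
        {
            // Called from $.ajax with type: 'GET'; nothing in the action needs to change.
            public JsonResult GetSample(string name)
            {
                var myObject = new { Name = name, Loaded = true };
                return Json(myObject); // resolves to the new JsonResult Json(object) wrapper
            }
        }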

    Read the article

  • Apache - create multiple aliases

    - by mc3mcintyre
    I'm trying to set up two websites on my Apache server. One is www.domain.com and the other is test.domain.com. Currently, my 000-default.conf file reads as follows:

        <VirtualHost www:80>
            # The ServerName directive sets the request scheme, hostname and port that
            # the server uses to identify itself. This is used when creating
            # redirection URLs. In the context of virtual hosts, the ServerName
            # specifies what hostname must appear in the request's Host: header to
            # match this virtual host. For the default virtual host (this file) this
            # value is not decisive as it is used as a last resort host regardless.
            # However, you must set it for any further virtual host explicitly.
            #ServerName www.domain.com
            #ServerAlias www
            ServerAdmin [email protected]
            DocumentRoot /var/www/domain.com/
            # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
            # error, crit, alert, emerg.
            # It is also possible to configure the loglevel for particular
            # modules, e.g.
            #LogLevel info ssl:warn
            ErrorLog ${APACHE_LOG_DIR}/domain.error.log
            CustomLog ${APACHE_LOG_DIR}/domain.access.log combined
            UseCanonicalName on
            allow from all
            Options +Indexes
            # For most configuration files from conf-available/, which are
            # enabled or disabled at a global level, it is possible to
            # include a line for only one particular virtual host. For example the
            # following line enables the CGI configuration for this host only
            # after it has been globally disabled with "a2disconf".
            #Include conf-available/serve-cgi-bin.conf
        </VirtualHost>

        <VirtualHost test:80>
            DocumentRoot "/var/www/domain.com/test/"
            ServerName test.domain.com
            ServerAdmin [email protected]
            ErrorLog ${APACHE_LOG_DIR}/test.domain.error.log
            CustomLog ${APACHE_LOG_DIR}/test.domain.access.log combined
            UseCanonicalName on
            allow from all
            Options +Indexes
        </VirtualHost>

        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    As is, when I use a browser to go to the www location, it shows me a directory listing. However, if I remove the www:80 on Line 1 and replace it with *:80, it correctly displays the webpage. I don't understand why. Can anyone help me configure this 000-default.conf file so that www goes to "/var/www/domain.com" and test goes to "/var/www/domain.com/test"? Thank you.
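
    In case it helps to show what I am aiming for, this is my rough guess (untested) at how the two hosts would look as ordinary name-based virtual hosts, with *:80 on both and ServerName doing the matching:

        <VirtualHost *:80>
            ServerName www.domain.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/domain.com
            ErrorLog ${APACHE_LOG_DIR}/domain.error.log
            CustomLog ${APACHE_LOG_DIR}/domain.access.log combined
        </VirtualHost>

        <VirtualHost *:80>
            ServerName test.domain.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/domain.com/test
            ErrorLog ${APACHE_LOG_DIR}/test.domain.error.log
            CustomLog ${APACHE_LOG_DIR}/test.domain.access.log combined
        </VirtualHost>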

    Read the article

  • Why CoffeeScript is tough to maintain

    - by Renso
    I recently started trying out CoffeeScript, only to find that it caused more headaches. The abstraction level of jQuery was perfect: it did not dictate to coders how to design their code, it just works. However, I recently posted a request to the CoffeeScript team to consider introducing curly braces to help with more complex code that controls the flow of logic. For example, an if-then-else with many nested levels can be near impossible to debug without tracing through it when using CoffeeScript. Also, with IDEs like Visual Studio, regular JavaScript IntelliSense and auto-formatting make it easy to appropriately indent nested levels without any work on the part of the developer, and reading it is not that hard, especially with extensions that show vertical lines in the code editor to help see what is nested within what part of the code. However, with CoffeeScript that is not the case. The samples given on the CoffeeScript web site are of course just simple examples to explain the features, and one gets excited pretty quickly over the powerful shortcuts. I tried to convert a piece of JavaScript over to CoffeeScript and gave up, since you first of all need to remove ALL non-CoffeeScript coding constructs for it to even compile (js2coffee can help with that). Keeping track of nested levels, however, became simply unmanageable in CoffeeScript. Furthermore, any coding language that controls the flow of logic by indentation is extremely dangerous for obvious reasons. I liked CoffeeScript a lot, but the fact that the logical flow of the code is controlled by how much you indent code, spaces or tabs, is not reliable, as there is no easy way for the programmer to know what parts of the code will get hit when the code spans a page. When I suggested introducing curly braces to the CoffeeScript team, one contributor advised me that my code needed to be re-designed! Needless to say, that is absurd. When I included a piece of the code, he asked me if it was legacy code. It's like saying to a Java programmer, sorry, you cannot use Java because we don't agree with how you write your code. jashkenas from the CoffeeScript blog gave some great suggestions and made the point that introducing curly braces would be very problematic for them, as they use them to denote objects. Makes sense, but I would still love to see some way to replace controlling code flow through spaces and indentation with something more concrete and human readable.

    Read the article

  • Ubuntu 12.04 Automounting ntfs partition

    - by kuzyt
    I've looked everywhere to fix this problem but I can't seem to figure out why it's doing this. I have the following /etc/fstab entry to mount an NTFS partition using ntfs-3g: UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 The volume label for this partition is "MEDIA02". So I have had no problems with the fstab mounting. The problem, however, is that it automounts again using the MEDIA02 label. I'm not sure automounting is the right term for this, as it's just an empty directory. Deleting this directory and rebooting causes it to appear again. So listing /media I see both MEDIA02 & mediahd02.

    htpc@htpc:~$ cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/sdf1 during installation
    UUID=ec027544-b0e7-4145-99a4-905543a9781a / ext4 errors=remount-ro,noatime,discard 0 1
    # swap was on /dev/sdf5 during installation
    UUID=1794409e-723f-41ac-9f31-ae059f377613 none swap sw 0 0
    # Added all the lines below this
    tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0
    UUID=0F70-3B06 /media/mediahd01 vfat defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2
    UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2

    htpc@htpc:~$ cat /etc/mtab
    /dev/sdc1 / ext4 rw,noatime,errors=remount-ro,discard 0 0
    proc /proc proc rw,noexec,nosuid,nodev 0 0
    sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
    none /sys/fs/fuse/connections fusectl rw 0 0
    none /sys/kernel/debug debugfs rw 0 0
    none /sys/kernel/security securityfs rw 0 0
    udev /dev devtmpfs rw,mode=0755 0 0
    devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
    tmpfs /tmp tmpfs rw,noatime,mode=1777 0 0
    tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
    none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
    none /run/shm tmpfs rw,nosuid,nodev 0 0
    /dev/sdc1 /media/usbhd-sdc1 ext4 rw,relatime 0 0
    /dev/sdb1 /media/mediahd02 fuseblk rw,noexec,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0
    /dev/sda5 /media/mediahd01 vfat rw,noexec,nosuid,nodev,uid=1000,gid=1000,dmask=007,fmask=117 0 0
    /dev/sdh1 /media/Windows_7 fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0

    Can someone shed some light as to why it's doing this?

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane: glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get: glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); Which produces a reasonable display ( camera at 0,5,0 looking down the y axis ). So rather than do the calculation manually, I should be able to use the OpenGL model transformation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result ( of course! )
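
    There is no OpenGL "debug mode" that prints transformed vertices as far as I know, but a few lines of standalone code reproduce what the fixed pipeline does with the matrix, so the results can be printed and compared with the hand calculation. This is just a rough sketch (written in C# here, but any language will do); the one assumption is that glLoadMatrixf treats the array as column-major:

        using System;

        class ShadowDebug
        {
            // m is in OpenGL (glLoadMatrixf) order, i.e. column-major: m[col * 4 + row].
            static float[] Transform(float[] m, float x, float y, float z)
            {
                var v = new[] { x, y, z, 1f };
                var r = new float[4];
                for (int row = 0; row < 4; row++)
                    for (int col = 0; col < 4; col++)
                        r[row] += m[col * 4 + row] * v[col];
                // homogeneous divide, as the fixed pipeline does before rasterising
                return new[] { r[0] / r[3], r[1] / r[3], r[2] / r[3] };
            }

            static void Main()
            {
                float[] shadow = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 };
                float[][] quad =
                {
                    new[] { 0.5f, 0.2f, -0.5f }, new[] { 0.5f, 0.2f, -1.5f },
                    new[] { 0.5f, 0.5f, -1.5f }, new[] { 0.5f, 0.5f, -0.5f },
                };
                foreach (var p in quad)
                {
                    var t = Transform(shadow, p[0], p[1], p[2]);
                    Console.WriteLine("({0}, {1}, {2}) -> ({3}, {4}, {5})",
                                      p[0], p[1], p[2], t[0], t[1], t[2]);
                }
            }
        }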

    Read the article

  • glGetActiveAttrib on Android NDK

    - by user408952
    In my code-base I need to link the vertex declarations from a mesh to the attributes of a shader. To do this I retrieve all the attribute names after linking the shader. I use the following code (with some added debug info since it's not really working): int shaders[] = { m_ps, m_vs }; if(linkProgram(shaders, 2)) { ASSERT(glIsProgram(m_program) == GL_TRUE, "program is invalid"); int attrCount = 0; GL_CHECKED(glGetProgramiv(m_program, GL_ACTIVE_ATTRIBUTES, &attrCount)); int maxAttrLength = 0; GL_CHECKED(glGetProgramiv(m_program, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxAttrLength)); LOG_INFO("shader", "got %d attributes for '%s' (%d) (maxlen: %d)", attrCount, name, m_program, maxAttrLength); m_attrs.reserve(attrCount); GLsizei attrLength = -1; GLint attrSize = -1; GLenum attrType = 0; char tmp[256]; for(int i = 0; i < attrCount; i++) { tmp[0] = 0; GL_CHECKED(glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)); LOG_INFO("shader", "%d: %d %d '%s'", i, attrLength, attrSize, tmp); m_attrs.append(String(tmp, attrLength)); } } GL_CHECKED is a macro that calls the function and then calls glGetError() to see if something went wrong. This code works perfectly on Windows 7 using ANGLE and gives this output: info:shader: got 2 attributes for 'static/simplecolor.glsl' (3) (maxlen: 11) info:shader: 0: 7 1 'a_Color' info:shader: 1: 10 1 'a_Position' But on my Nexus 7 (1st gen) I get the following (the errors are the output from the GL_CHECKED macro): I/testgame:shader(30865): got 2 attributes for 'static/simplecolor.glsl' (3) (maxlen: 11) E/testgame:gl(30865): 'glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)' failed: INVALID_VALUE [jni/src/../../../../src/Game/Asset/ShaderAsset.cpp:50] I/testgame:shader(30865): 0: -1 -1 '' E/testgame:gl(30865): 'glGetActiveAttrib(m_program, GLuint(i), sizeof(tmp), &attrLength, &attrSize, &attrType, tmp)' failed: INVALID_VALUE [jni/src/../../../../src/Game/Asset/ShaderAsset.cpp:50] I/testgame:shader(30865): 1: -1 -1 '' I.e. the call to glGetActiveAttrib gives me an INVALID_VALUE. The OpenGL docs say this about the possible errors: GL_INVALID_VALUE is generated if program is not a value generated by OpenGL. This is not the case; I added an ASSERT to make sure glIsProgram(m_program) == GL_TRUE, and it doesn't trigger. GL_INVALID_OPERATION is generated if program is not a program object. Different error. GL_INVALID_VALUE is generated if index is greater than or equal to the number of active attribute variables in program. i is 0 and 1, and the number of active attribute variables is 2, so this isn't the case. GL_INVALID_VALUE is generated if bufSize is less than 0. Well, it's not negative, it's 256. Does anyone have an idea what's causing this? Am I just lucky that it works in ANGLE, or is the NVIDIA Tegra driver wrong?

    Read the article

  • The most dangerous SQL Script in the world!

    - by DrJohn
    In my last blog entry, I outlined how to automate SQL Server database builds from concatenated SQL Scripts. However, I did not mention how I ensure the database is clean before I rebuild it. Clearly a simple DROP/CREATE DATABASE command would suffice; but you may not have permission to execute such commands, especially in a corporate environment controlled by a centralised DBA team. However, you should at least have database owner permissions on the development database so you can actually do your job! Then you can employ my universal "drop all" script which will clear down your database before you run your SQL Scripts to rebuild all the database objects. Why start with a clean database? During the development process, it is all too easy to leave old objects hanging around in the database which can have unforeseen consequences. For example, when you rename a table you may forget to delete the old table and change all the related views to use the new table. Clearly this will mean an end-user querying the views will get the wrong data and your reputation will take a nose dive as a result! Starting with a clean, empty database and then building all your database objects using SQL Scripts using the technique outlined in my previous blog means you know exactly what you have in your database. The database can then be repopulated using SSIS and bingo; you have a data mart "to go". My universal "drop all" SQL Script To ensure you start with a clean database run my universal "drop all" script which you can download from here: 100_drop_all.zip By using the database catalog views, the script finds and drops all of the following database objects: Foreign key relationships Stored procedures Triggers Database triggers Views Tables Functions Partition schemes Partition functions XML Schema Collections Schemas Types Service broker services Service broker queues Service broker contracts Service broker message types SQLCLR assemblies There are two optional sections to the script: drop users and drop roles. You may use these at your peril, particularly as you may well remove your own permissions! Note that the script has a verbose mode which displays the SQL commands it is executing. This can be switched on by setting @debug=1. Running this script against one of the system databases is certainly not recommended! So I advise you to keep a USE database statement at the top of the file. Good luck and be careful!!

    Read the article

  • jrunscript as a cross platform scripting environment

    - by user12798506
    Shell scripts written for sh run well on UNIX-like systems, but they are awkward on Windows: find, grep, sed and awk are not available by default, Windows Script Host only works on Windows, and plain Java is heavyweight for small scripting tasks. Installing Cygwin on every PC just to run a few scripts is also a burden. Since JDK 6, however, every JDK ships with jrunscript, a JavaScript shell, which can serve as a single cross-platform scripting environment with nothing extra to install. Below are a few ways to use it.

    1) Running the same script on Windows and UNIX. Put the actual logic in a JavaScript file (mytool.js) and start it through a thin platform-specific wrapper that calls jrunscript.

    mytool.sh (UNIX):
        #!/bin/sh
        bindir=$(cd $(dirname $0) && pwd)
        case "`uname`" in
            CYGWIN*) bindir=`cygpath -w "$bindir"` ;;
        esac
        jrunscript "${bindir}/mytool.js" $*

    mytool.bat (Windows):
        @echo off
        set bindir=%~dp0
        jrunscript "%bindir%mytool.js" %*

    The sh wrapper converts the path with cygpath when running under Cygwin, so the same mytool.js runs unchanged on UNIX and Windows.

    2) Using jrunscript's built-in cat, cp, find and grep. jrunscript provides JavaScript built-in functions that mirror common UNIX commands (see "jrunscript JavaScript built-in functions"). For example, to grep for "enum" in every .java file under src:

        find('src', '.*.java', function(f) { grep('enum', f); });

    There is no built-in recursive copy corresponding to

        $ cp -r src/* tmp/

    but one is easy to write with find() and cp():

        function cpr(fromdir, todir, pattern) {
            if (pattern == undefined) {
                pattern = ".*";
            }
            var frdir = pathToFile(fromdir).getCanonicalPath();
            find(fromdir, pattern, function(f) {
                // relative dir of file f from 'fromdir'.
                var relative = f.getParentFile().getCanonicalPath().substring(frdir.length() + 1);
                var dstdir = pathToFile(todir + "/" + relative);
                if (!dstdir.exists()) {
                    // Create the destination dir for file f.
                    mkdirs(dstdir);
                }
                // Copy file f to 'dstdir'.
                cp(f, dstdir + "/" + f.getName());
            });
        }

    Because the java.io APIs accept "/" as a path separator on Windows as well, the same code runs on both platforms. External commands can be run with exec(cmd):

        $ jrunscript
        js> exec("jar xvf example.jar")
        META-INF/
        META-INF/MANIFEST.MF
        com/
        com/example/
        com/example/Bar.class
        com/example/dummy/
        com/example/dummy/dummy.txt
        com/example/dummy.properties
        com/example/Foo.class

    The built-in exec(), however, only drains the process's standard output, which causes trouble for commands that write a lot to standard error. Take a batch file like this:

    errmsg.bat:
        for /L %%i in (1,1,50) do echo "Error Message count = %%i" 1>&2

    Run through exec(), the output stops after 18 lines:

        C:\tmp>jrunscript -e "exec('errmsg.bat')"
        C:\tmp>for /L %i in (1 1 100) do echo "Error Message count = %i" 1>&2
        C:\tmp>echo "Error Message count = 1" 1>&2
        :
        C:\tmp>echo "Error Message count = 18" 1>&2

    The reason is visible in the source of the built-in exec(), which reads only the standard output stream:

        $ jrunscript
        js> this["exec"].toString()
        function exec(cmd) {
            var process = java.lang.Runtime.getRuntime().exec(cmd);
            var inp = new DataInputStream(process.getInputStream());
            var line = null;
            while ((line = inp.readLine()) != null) {
                println(line);
            }
            process.waitFor();
            $exit = process.exitValue();
        }

    A more robust exec() drains standard output and standard error on separate threads, using the built-in cat() to read each stream:

        function exec(cmd) {
            var process = java.lang.Runtime.getRuntime().exec(cmd);
            var stdworker = new java.lang.Runnable(
                {run: function() { cat(process.getInputStream()); }});
            var errworker = new java.lang.Runnable(
                {run: function() { cat(process.getErrorStream()); }});
            new java.lang.Thread(stdworker).start();
            new java.lang.Thread(errworker).start();
            return process.waitFor();
        }

    3) No extra runtime to install. Script languages such as Ruby, Groovy or Scala each need their own runtime of roughly 10 MB or more to be installed or bundled, whereas the JavaScript engine used by jrunscript ships with every JRE and JDK, so there is nothing extra to install.

    Read the article

  • Web Development Goes Pre-Visual InterDev

    - by Ken Cox [MVP]
    As a longtime and hardcore ASP.NET webforms developer, I’m finding the new client-side development world a bit of a grind.  I love learning new technologies, but I can’t help feeling we’ve regressed and lost our old RAD advantage as we move heavy lifting to the client. For my latest project, I’m using Telerik’s KendoUI in Visual Studio 2012. To say I feel clumsy writing this much JavaScript is an understatement. It seems like the only safe way to ‘write’ this code is by copying a working snippet from someone else and pasting it into my HTML page.  For me, JavaScript has largely been for small UI tasks like client-side validation and a bit of AJAX – and often emitted by a server-side control. I find myself today lost in nests of curly braces that Ctrl+K, Ctrl+D doesn’t seem to understand that well either. IntelliSense, my old syntax saviour, doesn’t seem to have kept up with this cobweb of code either. Code completion? Not seeing it. As I fumbled about this evening, I thought about how web development rocketed forward when Microsoft introduced Visual InterDev. Its Design-Time Controls (DTCs) changed the way we created sites. All the iterations of Visual Studio have enhanced that server-side experience where you let a tool write the bulk of the code and manually finesse it from there. What happened? Why am I typing  properties and values (especially default values!) into VS 2012 to get a client-side grid on a page? Where are the drag and drop objects that traditionally provided 70 percent of the mark-up and configuration?  Did we forget how to write Property Pages where you enter a value and the correct syntax appears magically in the source code? To me, the tooling was looking the other way as the scene shifted from server-side code to nimble client-side script. It’ll have to catch up. Although JavaScript is the lingua franca of web browsers, the language is unwieldy, tough to maintain, and messy to debug. If a .NET JIT compiler can turn our VB, F#, and C# source code into an Intermediate Language that executes on a computer, I don’t see why there can’t be a client-side compiler that turns a .NET language into JavaScript that browsers can consume.

    Read the article

  • OpenGLES GLSL Shader attributes always bound to 0

    - by codemonkey
    So I have a very simple vertex shader as follows #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } Which I load, as well as the fragment shader: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } Which I then load, compile, and link to my shader program. I check for link status using glGetProgramiv(shaderProgram, GL_LINK_STATUS, &shaderSuccess); which returns GL_TRUE so I think its ok. However, when I query the active attributes and uniforms using #ifdef DEBUG int totalAttributes = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &totalAttributes); for(int i=0; i<totalAttributes; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveAttrib(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetAttribLocation(shaderProgram, name); fprintf(stderr, "Attribute %s is bound at %d\n", name, location); } int totalUniforms = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_UNIFORMS, &totalUniforms); for(int i=0; i<totalUniforms; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveUniform(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetUniformLocation(shaderProgram, name); fprintf(stderr, "Uniform %s is bound at %d\n", name, location); } #endif I get: Attribute inColor is bound at 0 Attribute position is bound at 1 Uniform mvp is bound at 0 Which leads to failure when trying to use the shader to render the objects. I have tried switching the order of declaration of position & inColor, but still, only position is bound with the other two giving 0 Can someone please explain why this is happening? Thanks

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a Web application using an in-memory collection that changes occasionally but is used very often. The collection gets loaded from storage on the Application_Start global.asax event and is updated whenever its content changes. If you want to deploy this application on Azure you need to keep in mind that more than one instance of the application can be running at any time and therefore you need to provide some mechanism to keep all instances informed with the latest changes. Because the communication through internal endpoints between Azure role instances is at no cost, a good solution can be maintaining the information on Azure Storage Tables, reading its contents on the Application_Start event and populating its changes to all other instances using the internal HTTP port available on Azure Web Roles. You need to follow these steps to leverage the internal HTTP endpoint available on Azure web roles to maintain all instances up to date. 1.   Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint   2.   Add a new WCF service to the Web Role, for example NotificationService.svc 3.   Disable multiple site bindings in web.config: <serviceHostingEnvironment multipleSiteBindingsEnabled="false"> 4.   Add a method on the new service to receive notifications from other role instances. namespace Service { [ServiceContract] public interface INotificationService { [OperationContract(IsOneWay = true)] void Notify(Information info); } } 5.   Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the method CreateServiceHost to host the internal endpoint. public class InternalServiceFactory : ServiceHostFactory { protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses) { var internalEndpointAddress = string.Format( "http://{0}/NotificationService.svc", RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint); ServiceHost host = new ServiceHost( typeof(NotificationService), new Uri(internalEndpointAddress)); BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None); host.AddServiceEndpoint( typeof(INotificationService), binding, internalEndpointAddress); return host; } } Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service. 6.   Edit the markup of the service right clicking the svc file and selecting "View markup" to add the new factory as the factory to be used to create the service <%@ ServiceHost Language="C#" Debug="true" Factory="Service.InternalServiceFactory" Service="Service.NotificationService" CodeBehind="NotificationService.svc.cs" %> 7.   Now you can notify changes to other instances using this code: var current = RoleEnvironment.CurrentRoleInstance; var endPoints = current.Role.Instances .Where(instance => instance != current) .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]); foreach (var ep in endPoints) { EndpointAddress address = new EndpointAddress( String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint)); BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None); var factory = new ChannelFactory<INotificationService>(binding); INotificationService instance = factory.CreateChannel(address); instance.Notify(changedinfo); }
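
    Step 4 above only shows the service contract; for completeness, a minimal implementation of the receiving side might look like the sketch below. Information is the contract's data type from step 4, and InMemoryCache is just a placeholder name for whatever structure was loaded in Application_Start - both usages here are illustrative, not from a real project:

        namespace Service
        {
            public class NotificationService : INotificationService
            {
                public void Notify(Information info)
                {
                    // Apply the change to this instance's in-memory collection so it
                    // catches up with the instance that originally made the change.
                    InMemoryCache.Current.Apply(info);
                }
            }
        }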

    Read the article

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4-based systems. So it's probably a good time to review what I consider to be the best practices.
    Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release, (b) every release we improve the generated code, so you will see things get better, and (c) old releases cannot know about new hardware.
    Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better as this will add within-file inlining.
    Always generate debug information, using -g. This allows the tools to attribute information to lines of source. This is particularly important when profiling an application.
    The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on these architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion.
    Crossfile optimisation (-xipo) can be very useful - particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as favourite compiler optimisations, then this is mine!
    Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. But what makes this optimisation really useful is that codes that are dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback.
    The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best performing binary, but with a few caveats. It assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating.
    The SPARC64 processor, T3, and T4 implement floating point multiply accumulate instructions. These can substantially improve floating point performance. To generate them the compiler needs the flag -fma=fused and also needs an architecture that supports the instruction (at least -xarch=sparcfmaf).
    The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.

    Read the article

  • BIP 10.1.3.4.x June 2010 Update Available

    - by Tim Dexter
    A new patchset for 10.1.3.4.0 and 10.1.3.4.1 is available on Metalink. Some notes: The patch number is 9791839. This patchset includes 28 new bug fixes since the last patchset release on March 31. This is a cumulative update that includes all the fixes and enhancements from previous updates; the patch will supersede the other two updates. Install instructions are in the readme inside the patch. There is also a new BIP client patch available, 9821068. No new template building features to my knowledge, but there is an update to the template viewer to allow you to test and debug your shiny new Excel templates.
    Server: 8529759 XMLP_TEMPLATE_DESIGNER CANNOT SAVE / UPLOAD TEMPLATE 8566455 BI PUBLISHER SCHEDULER DOES NOT START WITH JNDI DATA SOURCE 9295667 RESPONSE OF GETSCHEDULEDREPORTINFO RETURNS STATUS AS 'UNKNOWN' INSTEAD OF 'SCHED 9542413 UNABLE TO CREATE A NEW TEMPLATE FROM UI 9546137 EXCEL ANALYZER TEMPLATE FAILS FOR A STRUCTURED XML WHEN IT IS UPLOADED 9556338 SIEBEL - BIP PARAMETERS SORT ORDER 9560562 BI PUBLISHER CACHE DIRECTORY FILLING UP AND POINTING TO INVALID DIRECTORY 9646599 USER ROLE DEFINED AS PRIMARYGROUP IN ACTIVEDIRECTORY GROUP ARE NOT RECOGNIZED 9664768 ER: NEED TO BIND USER ATTRIBUTE VALUES DEFINED IN ACTIVEDIRECTORY IN DATA QUERY 9665075 BI PUBLISHER AFTER 9546699 NOTIFICATIONS FOR REPORTS FAIL 9669973 ER: NEED TO SUPPORT PRE-PROCESSING XML WITH XSL FOR EXCEL TEMPLATE 9704401 ER: NEED TO SUPPORT DEFAULT GROUP FOR ALL USERS IN LDAP/AD SECURITY 9711899 SEARCH PARAMETER IS NOT VISIBLE WHEN SCHEDULE A REPORT 9753736 SOME ROLES FROM ACTIVEDIRECTORY ARE NOT LISTED IN ADMIN ROLE-FOLDER MAPPING 9771354 MULTIPLE PARAMETERS IN 10.1.3.4.1 DATA TEMPLATE ACT DIFFERENTLY FROM 10.1.3. 9772982 "REFRESH OTHER PARAMETERS ON CHANGE" DOESN'T WORK PROPERLY
    Core: 8599646 ER: EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX 9377593 SOME ROWS HEIGHT IN HTML/EXCEL OUTPUT ARE TOO BIG IN BI PUBLISHER 9487030 NAVIGATION TREE REPEATING TWICE IN PDF DOCUMENT CREATED BY BI PUBLISHER 9509432 PERFORMANCE ISSUE WHEN USING PDF TEMPLATE 9534424 PS: DOCUMENT-REPEAT-FULLPATH-ELEMENTNAME SHOULDNT USE DOT "." AS PATH SEPARATOR 9553360 FORMPROCESSOR CANNOT PARSE SOME PDF TEMPLATES 9554959 TEXT IN AUTOSHAPE IS NOT PROPERLY CUT OFF FOR LINE WRAPPING 9569417 AFTER APPLYING PATCH 9509432 PDF TEMPLATES WITH DBDRV PRODUCE NO OUTPUT 9571670 ER: EXCEL TEMPLATE TO SUPPORT XSLT LOGIC AND XSL CUSTOM EXTENTIONS 9589809 XSL:CALL-TEMPLATE IS MISSING IN GENERATED XSL FILE 9605920 BOOKMARK TESTCASE FAILED DUE TO ER9283933 9689634 PRINT FLOW CHART USING ACROSS 3 DOWN 0 GIVES EXTRA BLANK PAGES
    You might have noticed some fixes and enhancements to the Excel templates, so I can get back on those now. There is a part two to the Mapviewer BIP Mashup coming ... just need another 4 hours in the day to squeeze it in.

    Read the article

  • What are the drawbacks of sending XML to browsers and letting them apply XSLT?

    - by MainMa
    Context: Working as a freelance developer, I often made websites completely based on XSLT. In other words, on every request, an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. An XSL process then transforms it (with caching, etc.) into the HTML/XHTML page sent to the browser. Its good point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one which I prefer to other template engines because it's much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side. Question: Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. This means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing both their bandwidth and the server's (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't apply in this situation: Difficult implementation and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one. The only change to make is to add the XSL file link to every XML, and to add a browser check. More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of being cached by the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (like images, CSS, or JavaScript files are cached today). Possibly some problems on the client side, like maybe problems when saving a page in some browsers. Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases, it is unusable directly (whitespace being removed).
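
    To make the first drawback concrete, the server-side branch could be as small as the sketch below (shown here in C# for illustration, although the same few lines work in PHP; all the helper methods are hypothetical placeholders):

        using System.Web.Mvc;

        public class PageController : Controller
        {
            public ActionResult Page()
            {
                // Hypothetical: builds the page XML, starting with
                // <?xml-stylesheet href="demo.xslt" type="text/xsl"?>
                string xml = BuildPageXml();

                // Hypothetical capability check based on the requesting browser.
                if (ClientCanApplyXslt(Request.UserAgent))
                {
                    return Content(xml, "text/xml"); // the browser applies demo.xslt itself
                }

                // Everyone else gets the server-side transformation as before.
                string html = TransformWithXslt(xml, "demo.xslt");
                return Content(html, "text/html");
            }

            // Placeholders; the real implementations are not shown here.
            private string BuildPageXml() { return "<page/>"; }
            private bool ClientCanApplyXslt(string userAgent) { return false; }
            private string TransformWithXslt(string xml, string xsltPath) { return "<html/>"; }
        }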

    Read the article

  • Unable to Mount an external hard drive (NTFS)

    - by Mediterran81
    Ubuntu 11.10. When I plug in my external Western Digital My Passport drive (500 GB, NTFS), which I named WD, I get the following error:

    Unable to mount WD Error mounting: mount exited with exit code 1: helper failed with: mount: according to mtab, /dev/sdb1 is already mounted on /media/WD mount failed

    I have no problem with the internal NTFS partitions that auto-mount on startup (ntfs-config does that). If I plug in the WD before I boot Ubuntu, it is recognized upon login and I can access it without any problem. But if I remove it using "Safely remove" and then replug it, I get the error above. Here is my fstab:

    # /etc/fstab: static file system information.
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    #Entry for /dev/sda5 :
    UUID=24540d0f-5803-493c-ace9-e3b3c0cedb26 / ext4 errors=remount-ro 0 1
    #Entry for /dev/sda3 :
    UUID=E4C43F7EC43F51D2 /media/OS ntfs-3g defaults,locale=en_US.UTF-8 0 0
    #Entry for /dev/sda2 :
    UUID=6A0070F10070C61B /media/RECOVERY ntfs-3g defaults,locale=en_US.UTF-8 0 0
    #Entry for /dev/sdb1 :
    UUID=EA6854D268549F5F /media/WD ntfs-3g defaults,nosuid,nodev,locale=en_US.UTF-8 0 0
    #Entry for /dev/sda6 :
    UUID=ed077c52-c50e-406c-9120-9cb6f86ec204 none swap sw 0 0

    Here is my mtab:

    /dev/sda5 / ext4 rw,errors=remount-ro,commit=0 0 0
    proc /proc proc rw,noexec,nosuid,nodev 0 0
    sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
    fusectl /sys/fs/fuse/connections fusectl rw 0 0
    none /sys/kernel/debug debugfs rw 0 0
    none /sys/kernel/security securityfs rw 0 0
    udev /dev devtmpfs rw,mode=0755 0 0
    devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
    tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
    none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
    none /run/shm tmpfs rw,nosuid,nodev 0 0
    /dev/sda3 /media/OS fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
    /dev/sda2 /media/RECOVERY fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
    /dev/sdb1 /media/WD fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
    binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
    gvfs-fuse-daemon /home/hanine/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=hanine 0 0

    Apparently it cannot be mounted because upon login it finds that it is already mounted - some sort of conflict. Does anyone have a clue how to solve this? Thanks.

    Read the article

  • Quick run through of the WP7 Developer Tools January 2011

    - by mbcrump
    In case you haven’t heard the latest WP7 Developers Tool update was released yesterday and contains a few goodies. First you need to go and grab the bits here. You can install them in any order, but I installed the WindowsPhoneDeveloperResources_en-US_Patch1.msp first. Then the VS10-KB2486994-x86.exe. They install silently. In other words, you would need to check Programs and Features and look in Installed Updates to see if they installed successfully. Like the screenshot below: Once you get them installed you can try out a few new features. Like Copy and Paste. Just fire up your application and put a TextBox on it and Select the Text and you will have the option highlighted in red above the text. Once you select it you will have the option to paste it. (see red rectangle below). Another feature is the Windows Phone Capability Detection Tool – This tool detects the phone capabilities used by your application. This will prevent you from submitting an app to the marketplace that says it uses x feature but really does not. How do you use it? Well navigate out to either directory: %ProgramFiles%\Microsoft SDKs\Windows Phone\v7.0\Tools\CapDetect %ProgramFiles (x86)%\Microsoft SDKs\Windows Phone\v7.0\Tools\CapDetect and run the following command: CapabilityDetection.exe Rules.xml YOURWP7XAPFILEOUTPUTDIRECTORY So, in my example you will see my app only requires the ID_CAP_MICROPHONE. Let’s see what the WmAppManifest.xml says in our WP7 Project: Whoa! That’s a lot of extra stuff we don’t need. We can delete unused capabilities safely now. Some of the other fixes are: (Copied straight from Microsoft) Fixes a text selection bug in pivot and panorama controls. In applications that have pivot or panorama controls that contain text boxes, users can unintentionally change panes when trying to copy text. To prevent this problem, open your application, recompile it, and then resubmit it to the Windows Phone Marketplace. Windows Phone Connect Tool – Allows you to connect your phone to a PC when Zune® software is not running and debug applications that use media APIs. For more information, see How to: Use the Connect Tool. Updated Bing Maps Silverlight Control – Includes improvements to gesture performance when using Bing™ Maps Silverlight® Control. Windows Phone Developer Tools Fix allowing deployment of XAP files over 64 MB in size to physical phone devices for testing and debugging. That’s pretty much it. Thanks again for reading my blog!  Subscribe to my feed CodeProject

    Read the article

  • Cron job running successfully suddenly reports script is not found

    - by Ted B
    What might cause cron to suddenly report that a file it is supposed to run is "not found," when the file hasn't been touched, and in fact, the entire system hasn't been touched since it last ran successfully? I have a cronjob schedule I define by doing sudo crontab -e. In it, I have dozens of cron jobs that run successfully. I do not have a PATH specified, and I use absolute paths to call all my scheduled scripts, setting the PATH in them as needed. I do not specify a SHELL in the crontab. All scripts identify the shell as their first line. Without me touching the system, a particular job defined in the middle of other jobs will suddenly stop running. To debug this, I added an output redirection to a log file. In that, the output clearly shows the output of the script successfully running time after time for weeks, and then suddenly the following appears:

    /bin/sh: /home/iupress/bin/sync-email_images: not found
    /bin/sh: /home/iupress/bin/sync-email_images: not found
    /bin/sh: /home/iupress/bin/sync-email_images: not found
    /bin/sh: /home/iupress/bin/sync-email_images: not found
    /bin/sh: /home/iupress/bin/sync-email_images: not found
    /bin/sh: /home/iupress/bin/sync-email_images: not found
    /bin/sh: /home/iupress/bin/sync-email_images: not found
    /bin/sh: /home/iupress/bin/sync-email_images: not found
    /bin/sh: /home/iupress/bin/sync-email_images: not found

    If I do the ls command, copying and pasting that exact path from the error message, it clearly reports the file is still there (no surprise). Yet the log continues to report that the file is "not found" until I take action. I can run the script manually and it runs just fine. If I do sudo crontab -e and save the file, the job runs on the next scheduled time, putting its output in the log, no longer reporting the script is "not found". It seems to me the contents of the script it is trying to run are irrelevant, since cron doesn't even process the file because it is "not found". I have a job scheduled below the one that is encountering this problem that I know continues to run, because I have its output mailed to me. So I know cron is running and continues to run at least one other job, even after it suddenly reports this job's script is "not found". All my lines end with a newline. I had no periods in the crontab until I added the redirection to a log file. I have now added a PATH specification, but left the absolute paths in the jobs. Unfortunately, I have no idea if and when this problem will occur. It will likely be weeks from now. By the way, I am running a script to synchronize the clock, and I see the time is exactly what it should be.

    Read the article

  • A Reusable Builder Class for .NET testing

    - by Liam McLennan
    When writing tests, other than end-to-end integration tests, we often need to construct test data objects. Of course this can be done using the class’s constructor and manually configuring the object, but getting many objects into a valid state soon becomes a large percentage of the testing effort. After many years of painstakingly creating builders for each of my domain objects I have finally become lazy enough to bother to write a generic, reusable builder class for .NET. To use it you instantiate an instance of the builder and configure it with a builder method for each class you wish it to be able to build. The builder method should require no parameters and should return a new instance of the type in a default, valid state. In other words the builder method should be a Func<TypeToBeBuilt>. The best way to make this clear is with an example. In my application I have the following domain classes that I want to be able to use in my tests: public class Person { public string Name { get; set; } public int Age { get; set; } public bool IsAndroid { get; set; } } public class Building { public string Street { get; set; } public Person Manager { get; set; } } The builder for this domain is created like so: build = new Builder(); build.Configure(new Dictionary<Type, Func<object>> { {typeof(Building), () => new Building {Street = "Queen St", Manager = build.A<Person>()}}, {typeof(Person), () => new Person {Name = "Eugene", Age = 21}} }); Note how Building depends on Person, even though the person builder method is not defined yet. Now in a test I can retrieve a valid object from the builder: var person = build.A<Person>(); If I need a class in a customised state I can supply an Action<TypeToBeBuilt> to mutate the object post construction: var person = build.A<Person>(p => p.Age = 99); The power and efficiency of this approach becomes apparent when your tests require larger and more complex objects than Person and Building. When I get some time I intend to implement the same functionality in JavaScript and Ruby. Here is the full source of the Builder class: public class Builder { private Dictionary<Type, Func<object>> defaults; public void Configure(Dictionary<Type, Func<object>> defaults) { this.defaults = defaults; } public T A<T>() { if (!defaults.ContainsKey(typeof(T))) throw new ArgumentException("No object of type " + typeof(T).Name + " has been configured with the builder."); T o = (T)defaults[typeof(T)](); return o; } public T A<T>(Action<T> customisation) { T o = A<T>(); customisation(o); return o; } }

    Read the article

  • Finding furthermost point in game world

    - by user13414
    I am attempting to find the furthermost point in my game world given the player's current location and a normalized direction vector in screen space. My current algorithm is:

    - convert the player's world location to screen space
    - multiply the direction vector by a large number (2000) and add it to the player's screen location to get the distant screen location
    - convert the distant screen location to world space
    - create a line running from the player's world location to the distant world location
    - loop over the bounding "walls" (of which there are always 4) of my game world
    - check whether the wall and the line intersect
    - if so, where they intersect is the furthermost point of my game world in the direction of the vector

    Here it is, more or less, in code:

        public Vector2 GetFurthermostWorldPoint(Vector2 directionVector)
        {
            var screenLocation = entity.WorldPointToScreen(entity.Location);
            var distantScreenLocation = screenLocation + (directionVector * 2000);
            var distantWorldLocation = entity.ScreenPointToWorld(distantScreenLocation);
            var line = new Line(entity.Center, distantWorldLocation);

            float intersectionDistance;
            Vector2 intersectionPoint;

            foreach (var boundingWall in entity.Level.BoundingWalls)
            {
                if (boundingWall.Intersects(line, out intersectionDistance, out intersectionPoint))
                {
                    return intersectionPoint;
                }
            }

            Debug.Assert(false, "No intersection found!");
            return Vector2.Zero;
        }

    Now this works, for some definition of "works". I've found that the further out my distant screen location is, the less chance it has of working. When digging into the reasons why, I noticed that calls to Viewport.Unproject could result in wildly varying return values for points that are "far away". I wrote this stupid little "test" to try and understand what was going on:

        [Fact]
        public void wtf()
        {
            var screenPositions = new Vector2[]
            {
                new Vector2(400, 240),
                new Vector2(400, -2000),
            };

            var viewport = new Viewport(0, 0, 800, 480);
            var projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, viewport.Width / viewport.Height, 1, 200000);
            var viewMatrix = Matrix.CreateLookAt(new Vector3(400, 630, 600), new Vector3(400, 345, 0), new Vector3(0, 0, 1));
            var worldMatrix = Matrix.Identity;

            foreach (var screenPosition in screenPositions)
            {
                var nearPoint = viewport.Unproject(new Vector3(screenPosition, 0), projectionMatrix, viewMatrix, worldMatrix);
                var farPoint = viewport.Unproject(new Vector3(screenPosition, 1), projectionMatrix, viewMatrix, worldMatrix);

                Console.WriteLine("For screen position {0}:", screenPosition);
                Console.WriteLine("  Projected Near Point = {0}", nearPoint.TruncateZ());
                Console.WriteLine("  Projected Far Point  = {0}", farPoint.TruncateZ());
                Console.WriteLine();
            }
        }

    The output I get on the console is:

        For screen position {X:400 Y:240}:
          Projected Near Point = {X:400 Y:629.571 Z:599.0967}
          Projected Far Point  = {X:392.9302 Y:-83074.98 Z:-175627.9}

        For screen position {X:400 Y:-2000}:
          Projected Near Point = {X:400 Y:626.079 Z:600.7554}
          Projected Far Point  = {X:390.2068 Y:-767438.6 Z:148564.2}

    My question is really twofold:

    - what am I doing wrong with the unprojection such that it varies so wildly and, thus, does not allow me to determine the corresponding world point for my distant screen point?
    - is there a better way altogether to determine the furthermost point in world space given a current world space location and a directional vector in screen space?
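    Not part of the original question, but for context: a common way to avoid pushing a screen point far off-screen is to unproject a single screen point at both the near and far planes and work with the resulting world-space ray. A minimal sketch under that assumption (class and method names are illustrative):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public static class ScreenRays
        {
            // Build a world-space pick ray from one screen point by unprojecting it
            // at the near plane (z = 0) and the far plane (z = 1).
            public static Ray ScreenPointToRay(Vector2 screenPoint, Viewport viewport,
                Matrix projection, Matrix view, Matrix world)
            {
                Vector3 nearPoint = viewport.Unproject(new Vector3(screenPoint, 0f), projection, view, world);
                Vector3 farPoint  = viewport.Unproject(new Vector3(screenPoint, 1f), projection, view, world);

                Vector3 direction = Vector3.Normalize(farPoint - nearPoint);
                return new Ray(nearPoint, direction);
            }
        }

    Intersecting that ray with the ground plane or the bounding walls keeps the result independent of how far out a distant screen point would otherwise have to be pushed.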

    Read the article

  • C# Dev - I've tried Lisps, but I don't get it.

    - by Jonathan Mitchem
    After a few months of learning about and playing with Lisps, both CL and a bit of Clojure, I'm still not seeing a compelling reason to write anything in one instead of C#. I would really like some compelling reasons, or for someone to point out that I'm missing something really big.

    The strengths of a Lisp (per my research):

    - Compact, expressive notation - More so than C#, yes... but I seem to be able to express those ideas in C# too.
    - Implicit support for functional programming - C# with LINQ extension methods: mapcar = .Select( lambda ); mapcan = .Select( lambda ).Aggregate( (a,b) => a.Union(b) ); car/first = .First(); cdr/rest = .Skip(1); etc.
    - Lambda and higher-order function support - C# has this, and the syntax is arguably simpler: "(lambda (x) ( body ))" versus "x => ( body )". "#(" with "%", "%1", "%2" is nice in Clojure.
    - Method dispatch separated from the objects - C# has this through extension methods.
    - Multimethod dispatch - C# does not have this natively, but I could implement it as a function call in a few hours.
    - Code is Data (and Macros) - Maybe I haven't "gotten" macros, but I haven't seen a single example where the idea of a macro couldn't be implemented as a function; it doesn't change the "language", but I'm not sure that's a strength.
    - DSLs - Can only do it through function composition... but it works.
    - Untyped "exploratory" programming - for structs/classes, C#'s autoproperties and "object" work quite well, and you can easily escalate into stronger typing as you go along.
    - Runs on non-Windows hardware - Yeah, so? Outside of college, I've only known one person who doesn't run Windows at home, or at least a VM of Windows on *nix/Mac. (Then again, maybe this is more important than I thought and I've just been brainwashed...)
    - The REPL for bottom-up design - OK, I admit this is really, really nice, and I miss it in C#.

    Things I'm missing in a Lisp (due to a mix of C#, .NET, Visual Studio, Resharper):

    - Namespaces. Even with static methods, I like to tie them to a "class" to categorize their context. (Clojure seems to have this; CL doesn't seem to.)
    - Great compile- and design-time support:
      - the type system allows me to determine "correctness" of the data structures I pass around
      - anything misspelled is underlined in realtime; I don't have to wait until runtime to know
      - code improvements (such as using an FP approach instead of an imperative one) are autosuggested
    - GUI development tools: WinForms and WPF. (I know Clojure has access to the Java GUI libraries, but they're entirely foreign to me.)
    - GUI debugging tools: breakpoints, step-in, step-over, value inspectors (text, xml, custom), watches, debug-by-thread, conditional breakpoints, call-stack window with the ability to jump to the code at any level in the stack. (To be fair, my stint with Emacs+Slime seemed to provide some of this, but I'm partial to the VS GUI-driven approach.)

    I really like the hype surrounding Lisps and I gave it a chance. But is there anything I can do in a Lisp that I can't do as well in C#? It might be a bit more verbose in C#, but I also have autocomplete. What am I missing? Why should I use Clojure/CL?
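    For concreteness, here is a small sketch of the LINQ correspondences listed above; the sample data and names are illustrative, not from the original question:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class LinqAsLisp
        {
            static void Main()
            {
                var words = new List<string> { "lambda", "macro", "repl" };

                // mapcar ~ Select
                var lengths = words.Select(w => w.Length);  // 6, 5, 4

                // mapcan ~ Select + Aggregate/Union (flatten, removing duplicates)
                var letters = words.Select(w => w.ToCharArray().AsEnumerable())
                                   .Aggregate((a, b) => a.Union(b));

                // car/first ~ First, cdr/rest ~ Skip(1)
                var head = words.First();
                var tail = words.Skip(1);

                Console.WriteLine(string.Join(",", lengths));
                Console.WriteLine(string.Join("", letters));
                Console.WriteLine(head + " | " + string.Join(" ", tail));
            }
        }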

    Read the article

  • Drawing a texture at the end of a trace (crosshair?) UDK

    - by Dave Voyles
    I'm trying to draw a crosshair at the end of my trace. If my crosshair does not hit a pawn or static mesh (e.g. just the skybox), then the crosshair stays locked on a certain point at that actor - I want to say its origin. For example: run it across a pawn and it turns yellow and stays on that pawn; run it across the skybox and it stays at one point on the box. Weird? How can I get my crosshair to stay consistent?

    I've included two images for reference, to help illustrate. Note: the wrench is actually my crosshair; the "X" is just a debug crosshair, so ignore that.

    /// Image 1 ///
    /// Image 2 ///

        /***************************************************************************
         * Draws the crosshair
         ***************************************************************************/
        function bool CheckCrosshairOnFriendly()
        {
            local float CrosshairSize;
            local vector HitLocation, HitNormal, StartTrace, EndTrace, ScreenPos;
            local actor HitActor;
            local MyWeapon W;
            local Pawn MyPawnOwner;

            /** Sets the PawnOwner */
            MyPawnOwner = Pawn(PlayerOwner.ViewTarget);

            /** Sets the Weapon */
            W = MyWeapon(MyPawnOwner.Weapon);

            /** If we don't have an owner, then get out of the function */
            if ( MyPawnOwner == None )
            {
                return false;
            }

            /** If we have a weapon... */
            if ( W != None )
            {
                /** Values for the trace */
                StartTrace = W.InstantFireStartTrace();
                EndTrace = StartTrace + W.MaxRange() * vector(PlayerOwner.Rotation);
                HitActor = MyPawnOwner.Trace(HitLocation, HitNormal, EndTrace, StartTrace, true, vect(0,0,0),, TRACEFLAG_Bullet);
                DrawDebugLine(StartTrace, EndTrace, 100, 100, 100);

                /** Projection for the crosshair to convert 3d coords into 2d */
                ScreenPos = Canvas.Project(HitLocation);

                /** If we haven't hit any actors... */
                if ( Pawn(HitActor) == None )
                {
                    HitActor = (HitActor == None) ? None : Pawn(HitActor.Base);
                }
            }

            /** If our trace hits no pawn... */
            if ( Pawn(HitActor) == None )
            {
                /** Draws the crosshair for no one - Grey */
                CrosshairSize = 28 * (Canvas.ClipY / 768) * (Canvas.ClipX / 1024);
                Canvas.SetDrawColor(100, 100, 128, 255);
                Canvas.SetPos(ScreenPos.X - (CrosshairSize * 0.5f), ScreenPos.Y - (CrosshairSize * 0.5f));
                Canvas.DrawTile(class'UTHUD'.default.AltHudTexture, CrosshairSize, CrosshairSize, 600, 262, 28, 27);
                return false;
            }

            /** Draws the crosshair for friendlies - Yellow */
            CrosshairSize = 28 * (Canvas.ClipY / 768) * (Canvas.ClipX / 1024);
            Canvas.SetDrawColor(255, 255, 128, 255);
            Canvas.SetPos(ScreenPos.X - (CrosshairSize * 0.5f), ScreenPos.Y - (CrosshairSize * 0.5f));
            Canvas.DrawTile(class'UTHUD'.default.AltHudTexture, CrosshairSize, CrosshairSize, 600, 262, 28, 27);
            return true;
        }

    Read the article

  • How to avoid the exception “Substitution controls cannot be used in cached User Controls or cached Master Pages”

    - by DigiMortal
    Recently I wrote an example about using user controls with donut caching. Because cache substitutions are not allowed inside partially cached controls, you may get the error "Substitution controls cannot be used in cached User Controls or cached Master Pages" when breaking this rule. In this posting I will introduce some strategies that help to avoid this error.

    How does the Substitution control check its location?

    The Substitution control uses the following check in its OnPreRender method:

        protected internal override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            for (Control control = this.Parent; control != null; control = control.Parent)
            {
                if (control is BasePartialCachingControl)
                {
                    throw new HttpException(SR.GetString("Substitution_CannotBeInCachedControl"));
                }
            }
        }

    It traverses the control tree upwards from its parent to find at least one control that is partially cached. If such a control is found, the exception is thrown.

    Reusing the functionality

    If you want to handle the situation yourself when your control may cause the exception mentioned before, you can use the same code. I modified the previously shown code into a method that can easily be moved to a user controls base class if you have one. If you don't, you can use it in the controls where you need this check.

        protected bool IsInsidePartialCachingControl()
        {
            for (Control control = Parent; control != null; control = control.Parent)
                if (control is BasePartialCachingControl)
                    return true;

            return false;
        }

    Now it is up to you how to handle the situation where your control with substitutions is a child of some partially cached control. You can also add some debug-level output here so you can see exactly which controls in the control hierarchy are cached and cause problems.
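    As one illustration of handling it, here is a minimal sketch of a user control that falls back to static output when it detects a partially cached ancestor; the control name, its child Substitution and Literal, and the fallback behaviour are assumptions for the example, not from the original post:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public class NewsTickerControl : UserControl
        {
            // Assumed child controls, normally declared in the .ascx markup:
            // an <asp:Substitution> for the dynamic fragment and an <asp:Literal>
            // holding a static fallback.
            protected Substitution NewsSubstitution;
            protected Literal FallbackLiteral;

            protected override void OnPreRender(EventArgs e)
            {
                base.OnPreRender(e);

                if (IsInsidePartialCachingControl())
                {
                    // A Substitution would throw inside a partially cached ancestor,
                    // so hide it and render the static fallback instead.
                    NewsSubstitution.Visible = false;
                    FallbackLiteral.Visible = true;
                }
            }

            protected bool IsInsidePartialCachingControl()
            {
                for (Control control = Parent; control != null; control = control.Parent)
                    if (control is BasePartialCachingControl)
                        return true;

                return false;
            }
        }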

    Read the article
