Search Results

Search found 9015 results on 361 pages for 'wireless range'.


  • Unable to acquire an image through ImageIO.read(url) because of a connection timeout

    - by Jake Frederix
    The following code always fails:

        URL url = new URL("http://userserve-ak.last.fm/serve/126/8636005.jpg");
        Image img = ImageIO.read(url);
        System.out.println(img);

    I've manually checked the URL; it is valid and serves a valid JPEG image. The problem I get is:

        Exception in thread "main" javax.imageio.IIOException: Can't get input stream from URL!
            at javax.imageio.ImageIO.read(ImageIO.java:1385)
            at maestro.Main2.main(Main2.java:25)
        Caused by: java.net.ConnectException: Connection timed out
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:310)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:176)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:163)
            at java.net.Socket.connect(Socket.java:546)
            at java.net.Socket.connect(Socket.java:495)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:174)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
            at sun.net.www.http.HttpClient.New(HttpClient.java:321)
            at sun.net.www.http.HttpClient.New(HttpClient.java:338)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:814)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:755)
            at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:680)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1005)
            at java.net.URL.openStream(URL.java:1029)
            at javax.imageio.ImageIO.read(ImageIO.java:1383)
            ... 1 more
        Java Result: 1

    What does this mean? Funny thing is, if I switch my internet connection to the neighbour's wireless, it suddenly works.
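
    This usually means the JVM cannot open a TCP connection to the host at all, which points at a proxy or firewall on that network rather than a bad URL. One way to confirm, and to fail faster, is to open the stream yourself with explicit timeouts instead of letting ImageIO open the URL internally. A minimal sketch (the 5-second timeouts are arbitrary choices):

        import java.awt.image.BufferedImage;
        import java.io.InputStream;
        import java.net.URL;
        import java.net.URLConnection;
        import javax.imageio.ImageIO;

        public class ImageFetch {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://userserve-ak.last.fm/serve/126/8636005.jpg");
                URLConnection conn = url.openConnection();
                conn.setConnectTimeout(5000);  // fail fast instead of hanging
                conn.setReadTimeout(5000);
                InputStream in = conn.getInputStream();
                try {
                    BufferedImage img = ImageIO.read(in);  // read from the stream, not the URL
                    System.out.println(img);
                } finally {
                    in.close();
                }
            }
        }

    If that network requires an HTTP proxy, the JVM also accepts -Dhttp.proxyHost and -Dhttp.proxyPort on the command line, which would explain why the neighbour's connection works while yours does not.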

  • Where are the real risks in network security?

    - by Barry Brown
    Anytime username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc.). But that leaves the end points potentially vulnerable. Realistically, which is at greater risk of intrusion?

    - Transport layer: compromised via wireless packet sniffing, malicious wiretapping, etc.
    - Transport devices: risks include ISPs and Internet backbone operators sniffing data.
    - End-user device: vulnerable to spyware, key loggers, shoulder surfing, and so forth.
    - Remote server: many uncontrollable vulnerabilities, including malicious operators, break-ins resulting in stolen data, physical theft of servers, backups kept in insecure places, and much more.

    My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) as low to non-existent. But what security does the server I'm connected to have? Has it been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks. What do you think?

  • Setting up ehcache replication - what multicast settings do I need?

    - by Darren Greaves
    I am trying to set up ehcache replication as documented here: http://ehcache.sourceforge.net/EhcacheUserGuide.html#id.s22.2

    This is on a Windows machine but will ultimately run on Solaris in production. The instructions say to set up a provider as follows:

        <cacheManagerPeerProviderFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
            properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                        multicastGroupPort=4446, timeToLive=32"/>

    And a listener like this:

        <cacheManagerPeerListenerFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
            properties="hostName=localhost, port=40001, socketTimeoutMillis=2000"/>

    My questions are:

    - Are the multicast IP address and port arbitrary? (I know the address has to live within a specific range, but do they have to be these specific numbers?)
    - Do they need to be set up in some way by our system administrator (I am on an office network)?
    - I want to test this locally, so I am running two separate Tomcat instances with the above config. What do I need to change in each one? I know both listeners can't listen on the same port, but what about the provider?
    - Are the listener ports arbitrary too?

    I've tried setting it up as above, but in my testing the caches don't appear to be replicated: a value added to one Tomcat's cache is not present in the other. Is there anything I can do to debug this (other than packet sniffing)? Thanks in advance for any help; I've been tearing my hair out over this one!
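
    For local testing, the provider block can be byte-for-byte identical in both Tomcat instances: the multicast group address and port only have to be valid multicast values (224.0.0.0-239.255.255.255) and the same on every peer, so 230.0.0.1:4446 is fine but not special. Only the listener port has to differ per JVM on one host. A sketch of the single change needed between the two instances:

        <!-- instance 1 -->
        <cacheManagerPeerListenerFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
            properties="hostName=localhost, port=40001, socketTimeoutMillis=2000"/>

        <!-- instance 2 -->
        <cacheManagerPeerListenerFactory
            class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
            properties="hostName=localhost, port=40002, socketTimeoutMillis=2000"/>

    Two other things worth ruling out: each replicated cache also needs a cacheEventListenerFactory (net.sf.ehcache.distribution.RMICacheReplicatorFactory) in its own cache definition, and on Windows a local firewall can silently drop the multicast discovery packets.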

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self-joins, which works on a rather large table. For that query to perform faster, I need to work with only a subset of the data; that subset can range between 12,000 and 120,000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow

    As you can see, I was using a CTE to return the data subset before, which caused performance problems: SQL Server was re-running the CTE's SELECT statement for every join instead of running it once and reusing its result set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables, so I tried that, but the performance is absolutely horrible: it takes 1m40s for my query to complete, whereas the CTE version took 40 seconds. I believe the table variable is slow for reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure

    The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that the CTE and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for my UDF to perform quickly? Thanks a lot in advance.
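
    One common workaround, assuming the callers can be adjusted, is to turn the UDF into a stored procedure: procedures do allow temporary tables, and a caller can capture the result set with INSERT ... EXEC. A minimal sketch (all object and column names here are placeholders, not the poster's schema):

        CREATE PROCEDURE dbo.GetSubsetResult
            @Param1 int
        AS
        BEGIN
            -- temp tables are allowed here, unlike in a multi-statement UDF
            SELECT id, parent_id, amount
            INTO #subset
            FROM dbo.BigTable
            WHERE group_id = @Param1;

            -- the expensive self-joins now run against the small temp table
            SELECT s1.id, s2.id AS related_id
            FROM #subset AS s1
            JOIN #subset AS s2 ON s2.parent_id = s1.id;
        END
        GO

        -- caller side
        CREATE TABLE #results (id int, related_id int);
        INSERT INTO #results EXEC dbo.GetSubsetResult @Param1 = 42;

    If it must remain a function, giving the table variable a PRIMARY KEY sometimes helps a little, but table variables carry no statistics, so the optimizer will keep treating them as tiny tables regardless.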

  • Setting objects (not users) inactive after period of time in asp.net mvc

    - by bastijn
    This question is mainly to verify my current idea. I have a series of objects which I want to be active for a specified amount of time: for instance, ads which are shown in the ad space only for the amount of time bought, objects in search results which should only pop up when active, and front-page posts which should be set to inactive after a prespecified time. My current idea is to just give those types of objects a StartDate and an EndDate, and filter the search routines to only show results which fall in the range StartDate < currentDate < EndDate. Is this the normal structure, or should there be some sort of auto-routine which periodically checks for objects that are over time and sets an "inactive" property to true? That approach seems like a hassle, since I would need a checker which runs, say, every 5 minutes and scans all DB objects, which seems like a bad idea to me. So is the first structure the most commonly used, or are there other options? When searching on Google or SO, the queries only return results about setting users inactive.
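
    The StartDate/EndDate filter is the usual structure: "active" is derived at query time, so no background job is needed just to flip a flag. A sketch of the filter in LINQ (assuming an Ads table with StartDate and EndDate columns):

        var now = DateTime.UtcNow;
        var activeAds = db.Ads
                          .Where(a => a.StartDate <= now && now < a.EndDate)
                          .ToList();

    A scheduled job only becomes worthwhile when expiry has to trigger side effects (notification emails, billing, cache invalidation) rather than mere visibility.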

  • Error while uploading a file via the Client Object Model in SharePoint 2010

    - by user1481570
    I get an error from my file upload method using the Client Object Model with SharePoint 2010. The code compiles without errors, but at execution time it fails with:

        {"Value does not fall within the expected range."}
        {System.Collections.Generic.SynchronizedReadOnlyCollection}

    I have a method which takes care of the upload:

        public void Upload_Click(string documentPath, byte[] documentStream)
        {
            String sharePointSite = "http://cvgwinbasd003:28838/sites/test04";
            String documentLibraryUrl = sharePointSite + "/" + documentPath.Replace('\\', '/');

            // Get the document list
            List documentsList = clientContext.Web.Lists.GetByTitle("Doc1");

            var fileCreationInformation = new FileCreationInformation();
            // Assign the content bytes, i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of the document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = documentLibraryUrl;

            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            //uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }

    In the MVC 3.0 application, the controller defines the following action to invoke the upload method:

        public ActionResult ProcessSubmit(IEnumerable<HttpPostedFileBase> attachments)
        {
            System.IO.Stream uploadFileStream = null;
            byte[] uploadFileBytes;
            int fileLength = 0;
            foreach (HttpPostedFileBase fileUpload in attachments)
            {
                uploadFileStream = fileUpload.InputStream;
                fileLength = fileUpload.ContentLength;
            }
            uploadFileBytes = new byte[fileLength];
            uploadFileStream.Read(uploadFileBytes, 0, fileLength);
            using (DocManagementService.DocMgmtClient doc = new DocMgmtClient())
            {
                doc.Upload_Click("Doc1/Doc2/Doc2.1/", uploadFileBytes);
            }
            return RedirectToAction("SyncUploadResult");
        }

    Please help me to locate the error.
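
    One thing that stands out: FileCreationInformation.Url ends up as ".../Doc1/Doc2/Doc2.1/", a folder path with a trailing slash and no file name, and "Value does not fall within the expected range" is the typical complaint when the target is not a valid file URL. A hedged sketch of the fix, passing the original file name through (DocMgmtClient is the poster's own service, so the extra path content is an assumed change on both sides):

        // controller side: keep the client's file name
        string fileName = "";
        foreach (HttpPostedFileBase fileUpload in attachments)
        {
            uploadFileStream = fileUpload.InputStream;
            fileLength = fileUpload.ContentLength;
            fileName = System.IO.Path.GetFileName(fileUpload.FileName);
        }
        // ...
        doc.Upload_Click("Doc1/Doc2/Doc2.1/" + fileName, uploadFileBytes);

    It is also worth confirming that the folder "Doc2/Doc2.1" actually exists under the library's RootFolder before calling Files.Add; the same exception appears when it does not. (The SynchronizedReadOnlyCollection in the error suggests the exception is being marshalled back through the WCF service boundary.)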

  • Setting up a PC Bluetooth server for Android

    - by Del
    Alright, I've been reading a lot of topics the past two or three days and nothing seems to have asked this. I am writing a PC-side server for my Android device, for exchanging some information and general debugging. Eventually I will be connecting to an SPP device to control a microcontroller. I have managed, using the following (Android to PC), to connect to RFCOMM channel 11 and exchange data between my Android device and my PC:

        Method m = device.getClass().getMethod("createRfcommSocket", new Class[] { int.class });
        tmp = (BluetoothSocket) m.invoke(device, Integer.valueOf(11));

    I have attempted the createRfcommSocketToServiceRecord(UUID) method, with absolutely no luck. For the PC side, I have been using the C BlueZ stack for Linux. I have the following code, which registers the service and opens a server socket:

        int main(int argc, char **argv)
        {
            struct sockaddr_rc loc_addr = { 0 }, rem_addr = { 0 };
            char buf[1024] = { 0 };
            char str[1024] = { 0 };
            int s, client, bytes_read;
            sdp_session_t *session;
            socklen_t opt = sizeof(rem_addr);

            session = register_service();
            s = socket(AF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM);
            loc_addr.rc_family = AF_BLUETOOTH;
            loc_addr.rc_bdaddr = *BDADDR_ANY;
            loc_addr.rc_channel = (uint8_t) 11;
            bind(s, (struct sockaddr *)&loc_addr, sizeof(loc_addr));
            listen(s, 1);
            client = accept(s, (struct sockaddr *)&rem_addr, &opt);
            ba2str( &rem_addr.rc_bdaddr, buf );
            fprintf(stderr, "accepted connection from %s\n", buf);
            memset(buf, 0, sizeof(buf));
            bytes_read = read(client, buf, sizeof(buf));
            if( bytes_read > 0 ) {
                printf("received [%s]\n", buf);
            }
            sprintf(str, "to Android.");
            printf("sent [%s]\n", str);
            write(client, str, sizeof(str));
            close(client);
            close(s);
            sdp_close( session );
            return 0;
        }

        sdp_session_t *register_service()
        {
            uint32_t svc_uuid_int[] = { 0x00000000, 0x00000000, 0x00000000, 0x00000000 };
            uint8_t rfcomm_channel = 11;
            const char *service_name = "Remote Host";
            const char *service_dsc = "What the remote should be connecting to.";
            const char *service_prov = "Your mother";

            uuid_t root_uuid, l2cap_uuid, rfcomm_uuid, svc_uuid;
            sdp_list_t *l2cap_list = 0, *rfcomm_list = 0, *root_list = 0,
                       *proto_list = 0, *access_proto_list = 0;
            sdp_data_t *channel = 0, *psm = 0;
            sdp_record_t *record = sdp_record_alloc();

            // set the general service ID
            sdp_uuid128_create( &svc_uuid, &svc_uuid_int );
            sdp_set_service_id( record, svc_uuid );

            // make the service record publicly browsable
            sdp_uuid16_create(&root_uuid, PUBLIC_BROWSE_GROUP);
            root_list = sdp_list_append(0, &root_uuid);
            sdp_set_browse_groups( record, root_list );

            // set l2cap information
            sdp_uuid16_create(&l2cap_uuid, L2CAP_UUID);
            l2cap_list = sdp_list_append( 0, &l2cap_uuid );
            proto_list = sdp_list_append( 0, l2cap_list );

            // set rfcomm information
            sdp_uuid16_create(&rfcomm_uuid, RFCOMM_UUID);
            channel = sdp_data_alloc(SDP_UINT8, &rfcomm_channel);
            rfcomm_list = sdp_list_append( 0, &rfcomm_uuid );
            sdp_list_append( rfcomm_list, channel );
            sdp_list_append( proto_list, rfcomm_list );

            // attach protocol information to service record
            access_proto_list = sdp_list_append( 0, proto_list );
            sdp_set_access_protos( record, access_proto_list );

            // set the name, provider, and description
            sdp_set_info_attr(record, service_name, service_prov, service_dsc);

            int err = 0;
            sdp_session_t *session = 0;

            // connect to the local SDP server, register the service record,
            // and disconnect
            session = sdp_connect( BDADDR_ANY, BDADDR_LOCAL, SDP_RETRY_IF_BUSY );
            err = sdp_record_register(session, record, 0);

            // cleanup
            //sdp_data_free( channel );
            sdp_list_free( l2cap_list, 0 );
            sdp_list_free( rfcomm_list, 0 );
            sdp_list_free( root_list, 0 );
            sdp_list_free( access_proto_list, 0 );

            return session;
        }

    And another piece of code which, in addition to 'sdptool browse local', can verify that the service record is running on the PC:

        int main(int argc, char **argv)
        {
            uuid_t svc_uuid;
            uint32_t svc_uuid_int[] = { 0x00000000, 0x00000000, 0x00000000, 0x00000000 };
            int err;
            bdaddr_t target;
            sdp_list_t *response_list = NULL, *search_list, *attrid_list;
            sdp_session_t *session = 0;

            str2ba( "01:23:45:67:89:AB", &target );

            // connect to the SDP server running on the remote machine
            session = sdp_connect( BDADDR_ANY, BDADDR_LOCAL, SDP_RETRY_IF_BUSY );

            // specify the UUID of the application we're searching for
            sdp_uuid128_create( &svc_uuid, &svc_uuid_int );
            search_list = sdp_list_append( NULL, &svc_uuid );

            // specify that we want a list of all the matching applications' attributes
            uint32_t range = 0x0000ffff;
            attrid_list = sdp_list_append( NULL, &range );

            // get a list of service records that have UUID 0xabcd
            err = sdp_service_search_attr_req( session, search_list,
                    SDP_ATTR_REQ_RANGE, attrid_list, &response_list );

            sdp_list_t *r = response_list;

            // go through each of the service records
            for ( ; r; r = r->next ) {
                sdp_record_t *rec = (sdp_record_t*) r->data;
                sdp_list_t *proto_list;

                // get a list of the protocol sequences
                if( sdp_get_access_protos( rec, &proto_list ) == 0 ) {
                    sdp_list_t *p = proto_list;

                    // go through each protocol sequence
                    for( ; p; p = p->next ) {
                        sdp_list_t *pds = (sdp_list_t*)p->data;

                        // go through each protocol list of the protocol sequence
                        for( ; pds; pds = pds->next ) {
                            // check the protocol attributes
                            sdp_data_t *d = (sdp_data_t*)pds->data;
                            int proto = 0;
                            for( ; d; d = d->next ) {
                                switch( d->dtd ) {
                                    case SDP_UUID16:
                                    case SDP_UUID32:
                                    case SDP_UUID128:
                                        proto = sdp_uuid_to_proto( &d->val.uuid );
                                        break;
                                    case SDP_UINT8:
                                        if( proto == RFCOMM_UUID ) {
                                            printf("rfcomm channel: %d\n", d->val.int8);
                                        }
                                        break;
                                }
                            }
                        }
                        sdp_list_free( (sdp_list_t*)p->data, 0 );
                    }
                    sdp_list_free( proto_list, 0 );
                }
                printf("found service record 0x%x\n", rec->handle);
                sdp_record_free( rec );
            }
            sdp_close(session);
        }

    Output:

        $ ./search
        rfcomm channel: 11
        found service record 0x10008

    sdptool:

        Service Name: Remote Host
        Service Description: What the remote should be connecting to.
        Service Provider: Your mother
        Service RecHandle: 0x10008
        Protocol Descriptor List:
          "L2CAP" (0x0100)
          "RFCOMM" (0x0003)
            Channel: 11

    And in logcat I'm getting this:

        07-22 15:57:06.087: ERROR/BTLD(215): ****************search UUID = 0000***********
        07-22 15:57:06.087: INFO//system/bin/btld(209): btapp_dm_GetRemoteServiceChannel()
        07-22 15:57:06.087: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 ####
        07-22 15:57:06.097: INFO/ActivityManager(88): Displayed activity com.example.socktest/.socktest: 79 ms (total 79 ms)
        07-22 15:57:06.697: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 ####
        07-22 15:57:07.517: WARN/BTLD(215): ccb timer ticks: 2147483648
        07-22 15:57:07.517: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 ####
        07-22 15:57:07.547: WARN/BTLD(215): info:x10
        07-22 15:57:07.547: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 10 pbytes (hdl 14)
        07-22 15:57:07.547: DEBUG/DTUN_HCID_BZ4(253): dtun_dm_sig_link_up()
        07-22 15:57:07.547: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_link_up: dummy_handle = 342
        07-22 15:57:07.547: DEBUG/ADAPTER(253): adapter_get_device(00:02:72:AB:7C:EE)
        07-22 15:57:07.547: ERROR/BluetoothEventLoop.cpp(88): pollData[0] is revented, check next one
        07-22 15:57:07.547: ERROR/BluetoothEventLoop.cpp(88): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/253/hci0/dev_00_02_72_AB_7C_EE
        07-22 15:57:07.777: WARN/BTLD(215): process_service_search_attr_rsp
        07-22 15:57:07.787: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 13 pbytes (hdl 14)
        07-22 15:57:07.787: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_rmt_service_channel: success=0, service=00000000
        07-22 15:57:07.787: ERROR/DTUN_HCID_BZ4(253): discovery unsuccessful!
        07-22 15:57:08.497: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 ####
        07-22 15:57:09.507: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 ####
        07-22 15:57:09.597: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 11 pbytes (hdl 14)
        07-22 15:57:09.597: DEBUG/DTUN_HCID_BZ4(253): dtun_dm_sig_link_down()
        07-22 15:57:09.597: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_link_down device = 0xf7a0 handle = 342 reason = 22
        07-22 15:57:09.597: ERROR/BluetoothEventLoop.cpp(88): pollData[0] is revented, check next one
        07-22 15:57:09.597: ERROR/BluetoothEventLoop.cpp(88): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/253/hci0/dev_00_02_72_AB_7C_EE
        07-22 15:57:09.597: DEBUG/BluetoothA2dpService(88): Received intent Intent { act=android.bluetooth.device.action.ACL_DISCONNECTED (has extras) }
        07-22 15:57:10.107: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 ####
        07-22 15:57:12.107: DEBUG/BluetoothService(88): Cleaning up failed UUID channel lookup: 00:02:72:AB:7C:EE 00000000-0000-0000-0000-000000000000
        07-22 15:57:12.107: ERROR/Socket Test(5234): connect() failed
        07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_abort [31,32,33]
        07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: s 31, how 2
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: fd (-1:31), bta -1, rc 0, wflags 0x0
        07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): __close_prot_rfcomm: fd 31
        07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): __close_prot_rfcomm: bind not completed on this socket
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btlif_signal_event: fd (-1:31), bta -1, rc 0, wflags 0x0
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btlif_signal_event: event BTLIF_BTS_EVT_ABORT matched
        07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: wrp_close_s_only [31] (31:-1) []
        07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: data socket closed
        07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wsactive_del: delete wsock 31 from active list [ad3e1494]
        07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: wsock fully closed, return to pool
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btsk_free: success
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_write: wrote 1 bytes out of 1 on fd 33
        07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_destroy
        07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_abort [31,32,33]
        07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: s 31, how 2
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: btsk not found, normal close (31)
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_write: wrote 1 bytes out of 1 on fd 33
        07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 33
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (33)
        07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 32
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (32)
        07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 31
        07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (31)
        07-22 15:57:12.157: DEBUG/Sensors(88): close_akm, fd=151
        07-22 15:57:12.167: ERROR/CachedBluetoothDevice(477): onUuidChanged: Time since last connect14970690
        07-22 15:57:12.237: DEBUG/Socket Test(5234): -On Stop-

    Sorry for bombarding you guys with what seems like a difficult question and a lot to read, but I've been working on this problem for a while and I've tried a lot of different things to get it working. Let me reiterate: I can get it to work, but not using the Service Discovery Protocol. I've tried several different UUIDs and two different computers, although I only have my HTC Incredible to test with. I've also heard some rumors that the BT stack wasn't working on the HTC Droid, but that isn't the case, at least for PC interaction.
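
    Two things stand out in the trace. The logcat lines "search UUID = 0000" and "Cleaning up failed UUID channel lookup: ... 00000000-0000-0000-0000-000000000000" show Android searching SDP for the all-zero UUID and failing, so createRfcommSocketToServiceRecord never learns a channel. Android matches against the record's ServiceClassIDList rather than the ServiceID alone, so one hedged fix on the BlueZ side is to also register the UUID as a service class (and a non-zero UUID, ideally the well-known SPP one, is a safer test value):

        // in register_service(), after sdp_set_service_id(...):
        // advertise the UUID in the ServiceClassIDList as well,
        // which is what the Android SDP lookup matches against
        sdp_list_t *svc_class_list = sdp_list_append(0, &svc_uuid);
        sdp_set_service_classes(record, svc_class_list);

    On the Android side, the matching client call with the standard Serial Port Profile UUID would look like this sketch:

        UUID sppUuid = UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");
        BluetoothSocket sock = device.createRfcommSocketToServiceRecord(sppUuid);
        sock.connect();  // SDP lookup for the UUID, then the RFCOMM connect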

  • WCF Data Services - neither .Expand or .LoadProperty seems to do what I need

    - by TomK
    I am building a school management app where they track student tardiness and absences. I've got three entities to help me with this: a Students entity (first name, last name, ID, etc.); a SystemAbsenceTypes entity with SystemAbsenceTypeID values for Late, Absent-with-Reason, and Absent-without-Reason; and a cross-reference table called StudentAbsences (matching the student IDs with the absence-type ID, plus a date and a Notes field). What I want to do is query my entities for a given student and then add up the number of each kind of absence for a given date range. I prepare my currentStudent object without a problem, then I do this:

        Me.Data.LoadProperty(currentStudent, "StudentAbsences") 'Loads the cross-ref data
        lblDaysLate.Text = (From ab In currentStudent.StudentAbsences
                            Where ab.SystemAbsenceTypes.SystemAbsenceTypeID =
                                  Common.enuStudentAbsenceTypes.Late).Count.ToString

    ...and this second line fails, complaining it has no value for an object. I presume the problem is that while it DOES see that there are (let's say) four absences for the currentStudent (i.e., currentStudent.StudentAbsences.Count = 4), it can't yet "peer into" each one of the absences to look at its type. How do I use .Expand or .LoadProperty to make this happen? I tried fiddling with .LoadProperty, but it doesn't take a two-level syntax such as Data.LoadProperty(currentStudent, "StudentAbsences.SystemAbsenceTypeID") or the like. Is there some other technique?
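
    LoadProperty only loads a single navigation property one level deep, which is why the dotted "StudentAbsences.SystemAbsenceTypeID" syntax is rejected. Expand, by contrast, takes a slash-separated path and pulls both levels back in the original query. A hedged sketch (ctx, StudentID, and studentId are assumed names standing in for the poster's context and key):

        Dim currentStudent = ctx.Students.Expand("StudentAbsences/SystemAbsenceTypes").Where(Function(s) s.StudentID = studentId).First()

        lblDaysLate.Text = currentStudent.StudentAbsences.Where(Function(ab) ab.SystemAbsenceTypes.SystemAbsenceTypeID = Common.enuStudentAbsenceTypes.Late).Count.ToString()

    With SystemAbsenceTypes loaded eagerly, the client-side count no longer dereferences a null navigation property, and the date-range check can be added to the same Where clause.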

  • How to make a custom NSFormatter work correctly on Snow Leopard?

    - by Nathan
    I have a custom NSFormatter attached to several NSTextFields whose only purpose is to uppercase characters as they are typed into the field. The entire code for my formatter is included below. The stringForObjectValue: and getObjectValue: implementations are no-ops, taken pretty much directly out of Apple's documentation. I'm using the isPartialStringValid: method to return an uppercase version of the string. This code works correctly on 10.4 and 10.5. When I run it on 10.6, I get "strange" behaviour where text fields don't always render the characters that are typed and sometimes just display garbage. I've tried enabling NSZombie detection and running under Instruments, but nothing was reported. I see errors like the following in Console:

        HIToolbox: ignoring exception '*** -[NSCFString replaceCharactersInRange:withString:]: Range or index out of bounds' that raised inside Carbon event dispatch
        (
            0  CoreFoundation  0x917ca58a __raiseError + 410
            1  libobjc.A.dylib 0x94581f49 objc_exception_throw + 56
            2  CoreFoundation  0x917ca2b8 +[NSException raise:format:arguments:] + 136
            3  CoreFoundation  0x917ca22a +[NSException raise:format:] + 58
            4  Foundation      0x9140f528 mutateError + 218
            5  AppKit          0x9563803a -[NSCell textView:shouldChangeTextInRange:replacementString:] + 852
            6  AppKit          0x95636cf1 -[NSTextView(NSSharing) shouldChangeTextInRanges:replacementStrings:] + 1276
            7  AppKit          0x95635704 -[NSTextView insertText:replacementRange:] + 667
            8  AppKit          0x956333bb -[NSTextInputContext handleTSMEvent:] + 2657
            9  AppKit          0x95632949 _NSTSMEventHandler + 209
            10 HIToolbox       0x93379129 _ZL23DispatchEventToHandlersP14EventTargetRecP14OpaqueEventRefP14HandlerCallRec + 1567
            11 HIToolbox       0x933783f0 _ZL30SendEventToEventTargetInternalP14OpaqueEventRefP20OpaqueEventTargetRefP14HandlerCallRec + 411
            12 HIToolbox       0x9339aa81 SendEventToEventTarget + 52
            13 HIToolbox       0x933fc952 SendTSMEvent + 82
            14 HIToolbox       0x933fc2cf SendUnicodeTextAEToUnicodeDoc + 700
            15 HIToolbox       0x933fbed9 TSMKeyEvent + 998
            16 HIToolbox       0x933ecede TSMProcessRawKeyEvent + 2515
            17 AppKit          0x95632228 -[NSTextInputContext handleEvent:] + 1453
            18 AppKit          0x9562e088 -[NSView interpretKeyEvents:] + 209
            19 AppKit          0x95631b45 -[NSTextView keyDown:] + 751
            20 AppKit          0x95563194 -[NSWindow sendEvent:] + 5757
            21 AppKit          0x9547bceb -[NSApplication sendEvent:] + 6431
            22 AppKit          0x9540f6fb -[NSApplication run] + 917
            23 AppKit          0x95407735 NSApplicationMain + 574
            24 macsetup        0x00001f9f main + 24
            25 macsetup        0x00001b75 start + 53
        )

    Can anybody shed some light on what is happening? Am I just using NSFormatter incorrectly?

        -(NSString*) stringForObjectValue:(id)object {
            if( ![object isKindOfClass: [NSString class]] ) {
                return nil;
            }
            return [NSString stringWithString: object];
        }

        -(BOOL)getObjectValue:(id*)object forString:string errorDescription:(NSString**)error {
            if( object ) {
                *object = [NSString stringWithString: string];
                return YES;
            }
            return NO;
        }

        -(BOOL) isPartialStringValid: (NSString*) cStr
                    newEditingString: (NSString**) nStr
                    errorDescription: (NSString**) error {
            *nStr = [NSString stringWithString: [cStr uppercaseString]];
            return NO;
        }
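
    On 10.6 the field editor is pickier about the selection range staying consistent with the replacement string, and the short isPartialStringValid:newEditingString:errorDescription: variant gives it nothing to go on. A commonly suggested fix, sketched here, is to override the long variant instead and only substitute when the string actually changes:

        - (BOOL)isPartialStringValid:(NSString **)partialStringPtr
               proposedSelectedRange:(NSRangePointer)proposedSelRangePtr
                      originalString:(NSString *)origString
               originalSelectedRange:(NSRange)origSelRange
                    errorDescription:(NSString **)error
        {
            NSString *upper = [*partialStringPtr uppercaseString];
            if ([upper isEqualToString:*partialStringPtr]) {
                return YES;  // already uppercase; accept the edit unchanged
            }
            *partialStringPtr = upper;
            // uppercasing preserves length here, so the proposed selection
            // range can be left as-is
            return NO;
        }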

  • Syncing data between devel/live databases in Django

    - by T. Stone
    With Django's new multi-db functionality in the development version, I've been trying to create a management command that lets me synchronize the data from the live site down to a developer machine for extended testing. (Having actual data, particularly user-entered data, allows me to test a broader range of inputs.) Right now I've got a "mostly" working command. It can sync "simple" model data, but the problem I'm having is that it ignores ManyToMany fields, which I don't see any reason for it to do. Anyone have any ideas of either how to fix that or a better way to handle this? Should I be exporting that first query to a fixture first and then re-importing it?

        from django.core.management.base import LabelCommand
        from django.db.utils import IntegrityError
        from django.db import models
        from django.conf import settings

        LIVE_DATABASE_KEY = 'live'

        class Command(LabelCommand):
            help = ("Synchronizes the data between the local machine and the live server")
            args = "APP_NAME"
            label = 'application name'
            requires_model_validation = False
            can_import_settings = True

            def handle_label(self, label, **options):
                # Make sure we're running the command on a developer machine
                # and that we've got the right settings
                db_settings = getattr(settings, 'DATABASES', {})
                if not LIVE_DATABASE_KEY in db_settings:
                    print 'Could not find "%s" in database settings.' % LIVE_DATABASE_KEY
                    return
                if db_settings.get('default') == db_settings.get(LIVE_DATABASE_KEY):
                    print 'Data cannot synchronize with self. This command must be run on a non-production server.'
                    return

                # Fetch all models for the given app
                try:
                    app = models.get_app(label)
                    app_models = models.get_models(app)
                except:
                    print "The app '%s' could not be found or models could not be loaded for it." % label

                for model in app_models:
                    print 'Syncing %s.%s ...' % (model._meta.app_label, model._meta.object_name)

                    # Query each model from the live site
                    qs = model.objects.all().using(LIVE_DATABASE_KEY)

                    # ...and save it to the local database
                    for record in qs:
                        try:
                            record.save(using='default')
                        except IntegrityError:
                            # Skip as the record probably already exists
                            pass
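
    The likely culprit: Model.save() writes only the model's own columns, and many-to-many links live in separate join tables that save() never touches, so they silently vanish. A hedged extension of the inner loop (Django 1.2-era API; if add() rejects raw primary keys in your version, look the related objects up locally first):

        for record in qs:
            try:
                record.save(using='default')
            except IntegrityError:
                pass
            # copy M2M links too; save() never writes the join tables
            for m2m_field in model._meta.many_to_many:
                # the related manager on a 'live' instance reads from the live db
                live_pks = [obj.pk for obj in getattr(record, m2m_field.name).all()]
                local_record = model.objects.using('default').get(pk=record.pk)
                local_manager = getattr(local_record, m2m_field.name)
                local_manager.clear()
                local_manager.add(*live_pks)

    Note that the rows on the other side of each M2M must already exist locally, so syncing apps in dependency order (or doing a second M2M-only pass) matters.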

  • Some help required while working on Java and Cygwin together

    - by Hippo
    Hello. I am new to Java and also to Cygwin, and do not have in-depth knowledge of either, so I need some help. In simple steps, I will try to explain my problem:

    1) I am working on TinyOS. It is an open-source OS used for wireless sensor networks, and it provides Java libraries for communication (PC to sensor).
    2) I am working in a Windows XP environment through Cygwin.
    3) I am developing an application. This application requires a Java interface called "SerialForwarder", which is readily available in the libraries provided. Previously I used to start this interface manually (by entering the command "java net.tinyos.sf.SerialForwarder") and then start my application, which uses this interface. But now I want to make my application self-contained, so the user need not know about these background Cygwin commands.
    4) So in my Java application I used Runtime.getRuntime().exec("java net.tinyos.sf.SerialForwarder").
    5) This is neither giving any error nor starting the interface.

    Am I going about this the right way? When I use the runtime exec command, how can I make sure that the command is run through the Cygwin environment? Also, if I want to write a .bat file in which I can give commands to be executed, how can I make sure those commands go through Cygwin and not through cmd.exe? Please help me.
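
    Runtime.exec does not go through any shell, Cygwin or otherwise; it launches the executable directly, and a common reason for "no error, nothing happens" is that the child's output is never drained or the classpath is missing. A hedged sketch using ProcessBuilder (the Cygwin bash path and the tinyos.jar location are assumptions to adjust for your install):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class SfLauncher {
            public static void main(String[] args) throws Exception {
                // run through Cygwin's bash explicitly; plain exec() uses no shell at all
                ProcessBuilder pb = new ProcessBuilder(
                        "C:\\cygwin\\bin\\bash.exe", "--login", "-c",
                        "java -cp \"$TOSROOT/support/sdk/java/tinyos.jar\" net.tinyos.sf.SerialForwarder");
                pb.redirectErrorStream(true);   // merge stderr into stdout
                Process p = pb.start();

                // drain output so the child never blocks on a full pipe
                BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println("[sf] " + line);
                }
            }
        }

    The same "bash --login -c" trick works from a .bat file if you prefer that route; cmd.exe then only launches bash, and everything after -c runs inside Cygwin.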

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason, when I sort this query by DESC it's super fast, but sorted by ASC it's extremely slow.

    This takes about 150 milliseconds:

        SELECT posts.id
        FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC
        LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id
        FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC
        LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out, performance is the same both ways, but the EXPLAIN shows the query as less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
          `id` int(20) NOT NULL AUTO_INCREMENT,
          `feed_id` int(11) NOT NULL,
          `post_url` varchar(255) NOT NULL,
          `title` varchar(255) NOT NULL,
          `content` blob,
          `author` varchar(255) DEFAULT NULL,
          `published` int(12) DEFAULT NULL,
          `updated` datetime NOT NULL,
          `created` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `post_url` (`post_url`,`feed_id`),
          KEY `feed_id` (`feed_id`),
          KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!
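
    What the hinted plan does is walk the published index and stop after 50 rows that match the feed_id list; apparently matching posts are dense at the recent end of that index and sparse at the old end, so the ASC scan wades through vastly more rows before collecting 50. A hedged fix is a composite index so the filter and the sort cooperate:

        ALTER TABLE posts ADD INDEX feed_published (feed_id, published);

    With that index (and no USE INDEX hint), MySQL can range-scan each feed_id already ordered by published. It will still filesort across the nine feeds, but only over the few hundred matching rows, which is cheap in either direction.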

  • DSP - Filtering frequencies using DFT

    - by Trap
    I'm trying to implement a DFT-based 8-band equalizer for the sole purpose of learning. To prove that my DFT implementation works, I fed it an audio signal, analyzed it, and then resynthesized it with no modifications made to the frequency spectrum. So far so good. I'm using the so-called standard way of calculating the DFT, which is by correlation. This method calculates the real and imaginary parts, both N/2 + 1 samples in length. To attenuate a frequency I'm just doing:

        float atnFactor = 0.6;
        Re[k] *= atnFactor;
        Im[k] *= atnFactor;

    where 'k' is an index in the range 0 to N/2, but what I get after resynthesis is a slightly distorted signal, especially at low frequencies. The input signal sample rate is 44.1 kHz, and since I just want an 8-band equalizer I'm feeding the DFT 16 samples at a time, so I have 8 frequency bins to play with. Can someone show me what I'm doing wrong? I tried to find info on this subject on the internet but couldn't find any. Thanks in advance.
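
    Attenuating Re[k] and Im[k] together is correct for the half-spectrum representation; the distortion usually comes from processing bare 16-sample blocks. Each modified block is effectively rectangular-windowed, and the discontinuities where blocks butt together smear energy across the (very coarse) 2.76 kHz-wide bins, which is most audible at low frequencies. The usual cure is longer blocks with a window and overlap-add, grouping bins into the 8 bands afterwards. A sketch of the structure (NumPy for brevity; Hann window with 75% overlap, all names assumed):

        import numpy as np

        def process_block(spectrum):
            # placeholder: apply the per-band gains to the N/2+1 bins here
            return spectrum

        def equalize(x, N=1024):
            hop = N // 4                   # 75% overlap: Hann^2 sums to ~1.5
            window = np.hanning(N)
            y = np.zeros(len(x) + N)
            for start in range(0, len(x) - N + 1, hop):
                block = x[start:start + N] * window
                spec = np.fft.rfft(block)  # N/2 + 1 bins, like the correlation DFT
                spec = process_block(spec)
                y[start:start + N] += np.fft.irfft(spec) * window
            return y[:len(x)] / 1.5        # undo the constant window overlap gain

    With smooth windows on both analysis and synthesis, a gain change in one bin no longer produces clicks at block boundaries.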

  • deepcopy and python - tips to avoid using it?

    - by blackkettle
    Hi, I have a very simple Python routine that involves cycling through a list of roughly 20,000 latitude/longitude coordinates and calculating the distance of each point to a reference point.

        def compute_nearest_points( lat, lon, nPoints=5 ):
            """Find the nearest N points, given the input coordinates."""
            points = session.query(PointIndex).all()
            oldNearest = []
            newNearest = []
            for n in xrange(nPoints):
                oldNearest.append(PointDistance(None,None,None,99999.0,99999.0))
                newNearest.append(obj2)  # This is almost certainly an inappropriate use of deepcopy
                                         # but how SHOULD I be doing this?!?!

            for point in points:
                distance = compute_spherical_law_of_cosines( lat, lon, point.avg_lat, point.avg_lon )
                k = 0
                for p in oldNearest:
                    if distance < p.distance:
                        newNearest[k] = PointDistance( point.point, point.kana, point.english,
                                                       point.avg_lat, point.avg_lon, distance=distance )
                        break
                    else:
                        newNearest[k] = deepcopy(oldNearest[k])
                    k += 1
                for j in range(k, nPoints-1):
                    newNearest[j+1] = deepcopy(oldNearest[j])
                oldNearest = deepcopy(newNearest)

            # We're done, now print the result
            for point in oldNearest:
                print point.station, point.english, point.distance
            return

    I initially wrote this in C, using the exact same approach, and it works fine there, and is basically instantaneous for nPoints<=100. So I decided to port it to Python because I wanted to use SQLAlchemy to do some other stuff. I first ported it without the deepcopy statements that now pepper the method, and this caused the results to be 'odd', or partially incorrect, because some of the points were just getting copied as references (I guess? I think?) -- but it was still pretty nearly as fast as the C version. Now with the deepcopy calls added, the routine does its job correctly, but it has incurred an extreme performance penalty and now takes several seconds to do the same job. This seems like a pretty common job, but I'm clearly not doing it the Pythonic way. How should I be doing this so that I still get the correct results but don't have to include deepcopy everywhere?
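
    The copies can be dropped entirely: nothing in the loop needs to mutate the points, so holding plain references is fine, and keep-the-N-smallest is exactly what heapq provides. A hedged rewrite (PointIndex, session, and compute_spherical_law_of_cosines as above):

        import heapq

        def compute_nearest_points(lat, lon, nPoints=5):
            """Find the nearest N points, given the input coordinates."""
            points = session.query(PointIndex).all()

            def dist(point):
                return compute_spherical_law_of_cosines(lat, lon, point.avg_lat, point.avg_lon)

            # single pass, keeps references only -- no copying at all
            for point in heapq.nsmallest(nPoints, points, key=dist):
                print point.station, point.english, dist(point)

    The original code needed deepcopy only because the same result objects were reused and overwritten across iterations; building a fresh result per match (or keeping references, as here) removes that aliasing problem.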

  • Test Views in ASP.NET MVC2 (ala RSpec)

    - by Dmitriy Nagirnyak
    Hi, I am really missing the ability to test views independently of controllers, the way RSpec does it. What I want to do is perform assertions on the rendered view (where no controller is involved!). In order to do so, I should provide the required Model, ViewData, and maybe some details from HttpContextBase (when will we get rid of HttpContext!). So far I have not found anything that allows doing this, and it might depend heavily on the ViewEngine being used. Things that views might contain include:

    - Partial views (possibly nested deeply).
    - Master pages (or similar in other view engines).
    - Html helpers generating links and other elements.
    - Generally almost anything within a range of common sense :) .

    Also, please note that I am not talking about client-side testing, so Selenium is not related to this at all; it is just plain .NET testing. So are there any options to actually do the testing of views?

    Thanks, Dmitriy.

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins them into a comma-delimited string:

        import numpy as np
        x = np.random.randn(100000)
        for i in range(100):
            ",".join(map(str, x))

    This loop takes about 20 seconds to complete (total, not per cycle). In contrast, 100 cycles of something like elementwise multiplication (x*x) would take less than 1/10 of a second to complete. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder: is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.
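
    Most of the cost is the per-element float-to-str call at the Python level. Two approaches worth benchmarking, as sketches: let NumPy format in bulk via savetxt into an in-memory buffer, or cut the precision explicitly with a cheaper format string:

        import numpy as np
        from cStringIO import StringIO  # Python 2, matching the snippet above

        x = np.random.randn(100000)

        # 1) bulk formatting through savetxt (one row, comma-delimited)
        buf = StringIO()
        np.savetxt(buf, x.reshape(1, -1), fmt='%.18e', delimiter=',')
        s = buf.getvalue().rstrip('\n')

        # 2) a lower-precision format is often several times faster
        s2 = ",".join('%.6f' % v for v in x)

    If full repr-level precision is not actually needed downstream, option 2 is usually the bigger win.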

  • Planning management slots/sessions

    - by Glide
    I have a planning structure on two tables to store available slots by day, and sessions. A slot is defined by a range of time in the day:

        CREATE TABLE slot (
            `id` int(11) NOT NULL AUTO_INCREMENT
          , `date` date
          , `start` time
          , `end` time
        );

    Sessions can't overlap each other and must be wrapped in a slot:

        CREATE TABLE session (
            `id` int(11) NOT NULL AUTO_INCREMENT
          , `date` date
          , `start` time
          , `end` time
        );

    I need to generate a list of available blocks of time of a certain duration, in order to create sessions. Example:

        INSERT INTO slot (date, start, end) VALUES
          ("2010-01-01", "10:00", "19:00")
        , ("2010-01-02", "10:00", "15:00")
        , ("2010-01-02", "16:00", "20:30")
        ;

        2010-01-01
        <##><####>                             <- Sessions
        ------------------------------------   <- Slots
        10 11 12 13 14 15 16 17 18 19 20

        2010-01-02
        <##########>      <########>           <- Sessions
        --------------------  ---------------  <- Slots
        10 11 12 13 14 15 16 17 18 19 20

    I need to know which spaces of 1 hour I can use:

        +------------+-------+-------+
        | date       | start | end   |
        +------------+-------+-------+
        | 2010-01-01 | 13:00 | 14:00 |
        | 2010-01-01 | 14:00 | 15:00 |
        | 2010-01-01 | 15:00 | 16:00 |
        | 2010-01-01 | 16:00 | 17:00 |
        | 2010-01-01 | 17:00 | 18:00 |
        | 2010-01-01 | 18:00 | 19:00 |
        | 2010-01-02 | 10:00 | 11:00 |
        | 2010-01-02 | 11:00 | 12:00 |
        | 2010-01-02 | 16:00 | 17:00 |
        +------------+-------+-------+
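
    For hour-aligned candidates like the sample output, one hedged approach in MySQL is to cross a derived table of hour numbers with the slots, then anti-join any overlapping session (a sketch; extend the hours list to cover your business day):

        SELECT s.`date`,
               SEC_TO_TIME(h.n * 3600)       AS `start`,
               SEC_TO_TIME((h.n + 1) * 3600) AS `end`
        FROM slot AS s
        JOIN (SELECT 10 AS n UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL
              SELECT 13 UNION ALL SELECT 14 UNION ALL SELECT 15 UNION ALL
              SELECT 16 UNION ALL SELECT 17 UNION ALL SELECT 18 UNION ALL
              SELECT 19) AS h
          ON SEC_TO_TIME(h.n * 3600) >= s.`start`          -- hour block inside the slot
         AND SEC_TO_TIME((h.n + 1) * 3600) <= s.`end`
        LEFT JOIN session AS x
          ON x.`date` = s.`date`
         AND x.`start` < SEC_TO_TIME((h.n + 1) * 3600)     -- overlap test
         AND x.`end`   > SEC_TO_TIME(h.n * 3600)
        WHERE x.id IS NULL                                  -- keep hours with no session
        ORDER BY s.`date`, `start`;

    Free blocks that are not aligned to the hour (e.g. 13:30-14:30) need a different technique, typically computing gaps from the sorted session end/start times instead of a fixed grid.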

  • Why does adding Crossover to my Genetic Algorithm gives me worse results?

    - by MahlerFive
    I have implemented a Genetic Algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using:

    - Mutation: Single Swap Mutation or Inverted Subsequence Mutation (as described by Tiendil here), with mutation rates tested between 1% and 25%.
    - Selection: Roulette Wheel Selection.
    - Fitness function: 1 / distance of tour.
    - Population size: tested 100, 200, 500; I also run the GA 5 times so that I have a variety of starting populations.
    - Stop condition: 2500 generations.

    With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover, my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, where I would expect the GA to do better. Any ideas as to why my GA performs worse when I add crossover? And why is it performing much worse than a simple hill-climb, which should get stuck on local maxima since it has no way of exploring once it finds a local max?

  • Javascript: Error 'Object Required.'

    - by javascripthelp
    The following is the error popup message I get when I click "Finalize" button on my website: "Line: 298 Char: 5 Error: Object required: 'lobi_c_selected(...)' Code: 0 URL: http://10.128.23.50/i-prostage/AP/w_ap_check_reconciliation.asp?" Normally, when I click "Finalize" button, it'll generate and show a report in a popup window. However, I'd get this error message instead. Can any of you help me locate the error in the following source code for the page that I'm running in IE 6? sub cb_finalize dim ll_loop, ll_found, lobj_c_selected of_SetHourGlass(True) rpt_link.innerHTML = "" rpt_link.href = "" 'Process only if at least one record was selected if rds1.Recordset.Recordcount 0 then lb_found = false if rds1.Recordset.Recordcount = 1 then if c_selected.checked then lb_found = true else Set lobj_c_selected = document.all.item("c_selected") for ll_loop = 1 to rds1.Recordset.Recordcount if lobj_c_selected(ll_loop - 1).checked then lb_found = true exit for end if next end if if not lb_found then msgbox "Please select a record to be posted.", vbInformation, "ProStage Accounting" of_SetHourGlass(False) else window.setTimeout "ue_process()", 100, vbscript 'Post Event end if else msgbox "There's no record to be posted." + vbcrlf + "Please select a record to be posted.", vbInformation, "ProStage Accounting" of_SetHourGlass(False) end if end sub Sub ue_process dim ll_loop, ll_count, ls_ret, ls_trxid, ls_r1 'Get only selected records redim ls_trxid(rds1.Recordset.Recordcount) for ll_loop = 1 to rds1.Recordset.Recordcount rds1.Recordset.AbsolutePosition = ll_loop if not isnull(rds1.Recordset("clrdt")) then 'Add to TRXID array if selected ll_count = ll_count + 1 ls_trxid(ll_count) = rds1.Recordset("trxid") end if next 'Process reconciliation rds1.Recordset.MarshalOptions = 1 ls_ret = iBO_Update.of_update_1(is_dbsrc, rds1.Recordset, "GLTRX", is_sql) if ls_ret < "1" then msgbox "Update Failed ! " + ls_ret, vbExclamation + vbOKonly, document.title of_SetHourGlass(False) else 'Display Posting Journal & clear screen ue_posting_journal ls_trxid, ll_count Set rds1.SourceRecordset = iBO_Company.of_validate(is_dbsrc, "SELECT 1 FROM DUAL WHERE 1 = 2") ib_query = false 'Not to process RetrieveEnd end if End Sub Sub ue_posting_journal(as_trxid, al_count) dim ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq of_setreport() ' Start service 'Prepare arguments for report in RPTMSTR table for ll_loop = 1 to al_count + 1 select case ll_loop case 1 'Range displayed as report title ll_argseq = 800 ls_argtyp = null ls_argmnt = "st_title.text = 'Bank: " + bnkid_name.value + ", As of Date: " + _ of_date_stringtodate(id_trxdt) + "'" ll_sargseq = 0 case else 'TRXID array ll_argseq = 1 ll_sargseq = ll_loop - 1 ls_argtyp = "S" ls_argmnt = as_trxid(ll_loop - 1) end select of_report_register_array "d_rpt_ap_check_reconciliation_register", ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq next of_report_process "d_rpt_ap_check_reconciliation_register", true, true 'Display report of_sethourglass(False) End Sub
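
    One detail worth noticing: the error text names lobi_c_selected, while the code declares lobj_c_selected, which suggests line 298 references a misspelled variable. VBScript silently treats an undeclared name as Empty, and invoking Empty with (...) then fails with "Object required" at the point of use. A hedged way to surface such typos at parse time instead:

        Option Explicit  ' must be the first statement; a misspelled lobi_/lobj_
                         ' name now raises "Variable is undefined" immediately
                         ' instead of "Object required" later
        Dim lobj_c_selected
        Set lobj_c_selected = document.all.item("c_selected")

    Also note that document.all.item("c_selected") returns a single element when only one checkbox is rendered and a collection otherwise; the code already branches on Recordcount = 1 for this, so the branch conditions are worth re-checking against line 298.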

  • Fixing color in scatter plots in matplotlib

    - by ajhall
    Hi guys, I'm going to have to come back and add some examples if you need them, which you might. But here's the skinny: I'm plotting scatter plots of lab data for my research. I need to be able to visually compare the scatter plots from one plot to the next, so I want to fix the color range on the scatter plots and add a colorbar to each plot (which will be the same in each figure). Essentially, I'm fixing all aspects of the axes and colorspace etc. so that the plots are directly comparable by eye. For the life of me, I can't seem to get my scatter() command to properly set the color limits in the colorspace (default)... i.e., I figure out my total data's min and max, then apply them to vmin and vmax for the subset of data, and the color still does not come out properly in both plots. This must come up here and there; I can't be the only one who wants to compare various subsets of data across plots. So, how do you fix the colors so that each datum keeps its color between plots and doesn't get remapped to a different color due to the change in max/min of the subset versus the whole set? I greatly appreciate all your thoughts!!! A Mountain Dew and fiery-hot Cheetos to all! -Allen
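
    Passing the global limits does pin the mapping, but only when the color data goes in as raw values via c= together with vmin/vmax on every scatter call; if scatter is handed precomputed colors, the limits are ignored. A minimal sketch under those assumptions (z_all is the full dataset's color values; subsets is a hypothetical iterable):

        import matplotlib.pyplot as plt

        zmin, zmax = z_all.min(), z_all.max()   # computed once, from ALL the data

        for x, y, z in subsets:                 # each subset gets its own figure
            fig = plt.figure()
            ax = fig.add_subplot(111)
            sc = ax.scatter(x, y, c=z, vmin=zmin, vmax=zmax)  # same mapping everywhere
            fig.colorbar(sc)                    # identical colorbar in each figure

    With vmin/vmax fixed, a given z value maps to the same color in every figure regardless of the subset's own range.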

  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script, which loops over the raster pixels, called find_pixel_pairs.py; a simple cache in lrucache.py; and some misc classes in utils.py. I have profiled the code on a moderate-sized dataset. pstats returns:

        p.sort_stats('cumulative').print_stats(20)
        Thu May 6 19:16:50 2010    phes.profile

        355483738 function calls in 11644.421 CPU seconds

        Ordered by: cumulative time
        List reduced from 86 to 20 due to restriction <20>

        ncalls     tottime    percall    cumtime    percall    filename:lineno(function)
        1          0.008      0.008      11644.421  11644.421  <string>:1(<module>)
        1          11064.926  11064.926  11644.413  11644.413  find_pixel_pairs.py:49(phes)
        340135349  544.143    0.000      572.481    0.000      utils.py:173(extent_iterator)
        8831020    18.492     0.000      18.492     0.000      {range}
        231922     3.414      0.000      8.128      0.000      utils.py:152(get_block_in_bands)
        142739     1.303      0.000      4.173      0.000      utils.py:97(search_extent_rect)
        745181     1.936      0.000      2.500      0.000      find_pixel_pairs.py:40(is_no_data)
        285478     1.801      0.000      2.271      0.000      utils.py:98(intify)
        231922     1.198      0.000      2.013      0.000      utils.py:116(block_to_pixel_extent)
        695766     1.990      0.000      1.990      0.000      lrucache.py:42(get)
        1213166    1.265      0.000      1.265      0.000      {min}
        1031737    1.034      0.000      1.034      0.000      {isinstance}
        142740     0.563      0.000      0.909      0.000      utils.py:122(find_block_extent)
        463844     0.611      0.000      0.611      0.000      utils.py:112(block_to_pixel_coord)
        745274     0.565      0.000      0.565      0.000      {method 'append' of 'list' objects}
        285478     0.346      0.000      0.346      0.000      {max}
        285480     0.346      0.000      0.346      0.000      utils.py:109(pixel_coord_to_block_coord)
        324        0.002      0.000      0.188      0.001      utils.py:27(__init__)
        324        0.016      0.000      0.186      0.001      gdal.py:848(ReadAsArray)
        1          0.000      0.000      0.160      0.160      utils.py:50(__init__)

    The top two calls contain the main loop, i.e. the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
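
    Nothing is actually missing: in the phes row, tottime is 11064.926 s, and tottime is time spent in that function's own bytecode, excluding callees, so the loop body itself is consuming the 11,000 seconds. cProfile cannot break that down further by line; a line-level profiler can. A sketch using the third-party line_profiler package (invocation as of its 2010-era releases; check your installed version):

        $ easy_install line_profiler
        # then add @profile above "def phes(...)" in find_pixel_pairs.py
        $ kernprof.py -l find_pixel_pairs.py
        $ python -m line_profiler find_pixel_pairs.py.lprof

    The per-line report usually points at attribute lookups and small function calls inside hot loops, which can then be hoisted out of the loop or inlined.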

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int
           ,col2 int
           ,col3 int
           ,col4 int
        PRIMARY KEY (col1,col2)

        INDEX IX_1
            col3
        INCLUDE
            col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1=12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query that I gave it provides no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index?

    It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way I can estimate the impact of removing one or more of these indices without having to actually try it?
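
    On the mechanics: the engine locates the doomed rows via the clustered index first, and at that point it knows every column value of each row, including col3 and col4, so each nonclustered index entry can be removed by a direct seek on its own key; no index scan is needed, but wider or more numerous indexes mean more seeks per deleted row. To gauge whether an index earns its keep before dropping it, the usage DMV is a low-risk check; a sketch (counters reset at instance restart):

        SELECT i.name,
               s.user_seeks, s.user_scans, s.user_lookups,  -- read benefit
               s.user_updates                               -- write cost (incl. deletes)
        FROM sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID()
          AND s.object_id = OBJECT_ID('dbo.Foo');

    An index with large user_updates and near-zero seeks/scans/lookups is a strong candidate for removal.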

  • MVC Localization of Default Model Binder

    - by Dai Bok
    Hi, I am currently trying to figure out how to localize the error messages generated by MVC. Let me use the default model binder as an example to explain the problem. Assume I have a form where a user enters their age. The user enters "ten" into the form, but instead of getting the expected error of "Age must be between 18 and 25." the message "The value 'ten' is not valid for Age." is displayed. The entity's Age property is defined below:

        [Range(18, 25, ErrorMessageResourceType = typeof (Errors),
               ErrorMessageResourceName = "Age", ErrorMessage = "Range_ErrorMessage")]
        public int Age { get; set; }

    After some digging, I noticed that this error text comes from System.Web.Mvc.Resources.DefaultModelBinder_ValueInvalid in the MvcResources.resx file. Now, how can I create localized versions of this file? As a solution, should I download the MVC source, add MvcResources.en_GB.resx, MvcResources.fr_FR.resx, MvcResources.es_ES.resx and MvcResources.de_DE.resx, and then compile my own version of MVC.dll? I don't like this idea. Does anyone know a better way?
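
    There is a supported hook for this without recompiling MVC: DefaultModelBinder.ResourceClassKey points the binder at your own global resource class, and culture-specific .resx variants of it are then picked up automatically. A sketch (the resource file name is your choice; the key conventionally used for this message is PropertyValueInvalid, which is worth verifying against your MVC version):

        // Global.asax.cs
        protected void Application_Start()
        {
            // binder now reads App_GlobalResources/Messages.resx
            // (plus Messages.fr.resx, Messages.de.resx, ...)
            DefaultModelBinder.ResourceClassKey = "Messages";
        }

    In Messages.resx, add a string named PropertyValueInvalid such as "The value '{0}' is not valid for {1}." and translate it in each culture's file.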

  • Basic syntax for an animation loop?

    - by Moshe
    I know that jQuery, for example, can do animation of sorts. I also know that at the very core of the animation there must be some sort of loop doing the animating. What is an example of such a loop? A complete answer should ideally answer the following questions:

    - What is a basic syntax for an effective animation recursion that can animate a single property of a particular object at a time? The function should be able to vary its target object and the property of the object.
    - What arguments/parameters should it take?
    - What is a good interval for reiterating the loop, in milliseconds? (Should this be a parameter/argument to the function?)

    REMEMBER: The answer is NOT necessarily language-specific, but if you are writing in a specific language, please specify which one. Error handling is a plus. (Nothing is more irritating, for our purposes, than an animation that does something strange, like stopping halfway through.) Thanks!
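
    A minimal JavaScript sketch of such a loop, using setInterval (the classic pre-requestAnimationFrame approach). The parameters cover the points above: target object, property name, start and end values, duration, and a ~16 ms tick for roughly 60 frames per second:

        // hypothetical usage: animate(el.style, 'opacity', 0, 1, 400)
        function animate(obj, prop, from, to, durationMs) {
            var start = new Date().getTime();
            var timer = setInterval(function () {
                var t = (new Date().getTime() - start) / durationMs;
                if (t >= 1) {
                    t = 1;
                    clearInterval(timer);       // always stop, even on the final frame
                }
                try {
                    obj[prop] = from + (to - from) * t;  // linear interpolation
                } catch (e) {
                    clearInterval(timer);       // bad target/property: stop cleanly
                }
            }, 16);                             // ~60 fps tick
        }

    Computing progress from elapsed wall-clock time, rather than counting ticks, is what keeps the animation from stalling partway when the timer fires late.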

  • reading csv files in scipy/numpy in Python

    - by user248237
    I am having trouble reading a csv file, delimited by tabs, in Python. I use the following function:

        def csv2array(filename, skiprows=0, delimiter='\t', raw_header=False,
                      missing=None, with_header=True):
            """
            Parse a file name into an array. Return the array and additional
            header lines. By default, parse the header lines into dictionaries,
            assuming the parameters are numeric, using 'parse_header'.
            """
            f = open(filename, 'r')
            skipped_rows = []
            for n in range(skiprows):
                header_line = f.readline().strip()
                if raw_header:
                    skipped_rows.append(header_line)
                else:
                    skipped_rows.append(parse_header(header_line))
            f.close()
            if missing:
                data = genfromtxt(filename, dtype=None, names=with_header,
                                  deletechars='', skiprows=skiprows, missing=missing)
            else:
                if delimiter != '\t':
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      delimiter=delimiter, deletechars='',
                                      skiprows=skiprows)
                else:
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      deletechars='', skiprows=skiprows)
            if data.ndim == 0:
                data = array([data.item()])
            return (data, skipped_rows)

    The problem is that genfromtxt complains about my files, e.g. with the error:

        Line #27100 (got 12 columns instead of 16)

    I am not sure where these errors come from. Any ideas? Here's an example file that causes the problem:

        #Gene	120-1	120-3	120-4	30-1	30-3	30-4	C-1	C-2	C-5	genesymbol	genedesc
        ENSMUSG00000000001	7.32	9.5	7.76	7.24	11.35	8.83	6.67	11.35	7.12	Gnai3	guanine nucleotide binding protein alpha
        ENSMUSG00000000003	0.0	0.0	0.0	0.0	0.0	0.0	0.0	0.0	0.0	Pbsn	probasin

    Is there a better way to write a generic csv2array function? Thanks.
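
    "Got 12 columns instead of 16" means some rows carry fewer tab-separated fields than genfromtxt inferred from the first rows, which aborts the parse in 2010-era NumPy. A quick stdlib pass can locate the offending lines before parsing; a sketch:

        import csv

        def find_ragged_rows(filename, delimiter='\t'):
            """Report rows whose field count differs from the first row's."""
            reader = csv.reader(open(filename, 'rb'), delimiter=delimiter)
            expected = None
            for lineno, row in enumerate(reader, 1):
                if expected is None:
                    expected = len(row)
                elif len(row) != expected:
                    print 'line %d: %d columns, expected %d' % (lineno, len(row), expected)

        find_ragged_rows('data.txt')  # hypothetical input file

    Typical culprits are trailing tabs, embedded tabs inside text fields such as the gene descriptions, or truncated final lines; once identified, the rows can be fixed in the file or skipped.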
