Search Results

Search found 36650 results on 1466 pages for 'random access'.

Page 272 of 1466

  • [C# Thread] I'd like access to a share on the network!

    - by JustLooking
    Some Details: I am working with VisualWebGUI, so this app is like ASP.NET, and it is deployed on IIS 7 (for testing). For my 'Web Site', Anonymous Authentication is set to a specific user (DomainName\DomainUser). In my web.config, I have impersonation on. This is how I got my app to access the share in the first place. The Problem: There is a point in the app where we use the Thread class, something similar to: Thread myThread = new Thread(new ThreadStart(objInstance.PublicMethod)); myThread.Start(); What I have noticed is that I can write to my logs (a text file on the share) everywhere throughout my code, except in the thread that I kicked off. I added some debugging output and what I see for users is: The thread that's kicked off: NT AUTHORITY\NETWORK SERVICE Everywhere else in my code: DomainName\DomainUser (described in my IIS setup) OK, for some reason the thread gets a different user (NETWORK SERVICE). Fine. But my share (and the actual log file) was given 'Full Control' to the NETWORK SERVICE user (this share resides on a different server than the one my app is running on). If NETWORK SERVICE has rights to this folder, why do I get access denied? Or is there a way to have the thread I kick off run as the same user as the rest of the process?
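    One commonly suggested approach to the last question - sketched below in C#, not taken from the original code - is to capture the impersonated WindowsIdentity on the request thread and re-impersonate it inside the worker thread before touching the share. The Worker class and WriteToShareLog method are illustrative names, not part of the original app.

      // Hypothetical sketch: re-impersonate the caller's identity inside the worker thread.
      using System.Security.Principal;
      using System.Threading;

      public class Worker
      {
          public void StartLoggingThread()
          {
              // Captured while ASP.NET impersonation is still in effect on the request thread
              WindowsIdentity callerIdentity = WindowsIdentity.GetCurrent();

              Thread myThread = new Thread(() =>
              {
                  using (WindowsImpersonationContext ctx = callerIdentity.Impersonate())
                  {
                      // This thread now runs as DomainName\DomainUser instead of NETWORK SERVICE
                      WriteToShareLog("hello from the worker thread");
                  }   // the impersonation is undone when the context is disposed
              });
              myThread.Start();
          }

          private void WriteToShareLog(string message)
          {
              // placeholder for the real log-to-share code
          }
      }

    Whether the re-impersonated token can still reach the remote share depends on how the DomainName\DomainUser logon was established, so treat this as a sketch to try rather than a guaranteed fix.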

    Read the article

  • The HTTP verb POST used to access path '[my path]' is not allowed.

    - by Jed
    I am receiving an error that states: "The HTTP verb POST used to access path '[my path]' is not allowed.". The error is being caused by the fact that I am implementing an HTML form element that uses the POST method and does not explicitly define an .aspx page in its ACTION parameter. For example: <form action="" method="post"> <input type="submit" /> </form> The HTML above is in a file at "/foo/default.aspx". Now, if the user points the URL to the root directory "foo" without specifying the aspx file (i.e. "http://localhost/foo") and then submits the form, the error "The HTTP verb POST used to access path '/foo' is not allowed." will be thrown. However, if the user goes to "http://localhost/foo/default.aspx" and then submits the form, all goes well (even if the ACTION parameter is left empty). Note: If I explicitly add the name of the .aspx page (default.aspx) to the ACTION parameter, no errors are thrown. So the example below works fine regardless of whether the user includes the file name in the URL or not. <form action="default.aspx" method="post"> <input type="submit" /> </form> I was curious as to why the error was being thrown, so I read a Microsoft KB article that states: This problem occurs because a client makes an HTTP request by sending the POST method to a static HTML page. Static HTML pages do not support the POST method. I suppose the core of the explanation makes sense; however, in my case the form is not being posted to a static HTML page - it's being posted to the same page that the HTML form lives on (default.aspx), which is what an empty ACTION parameter implies. Is it possible to configure IIS (or something else) so that we can POST the form and keep the ACTION parameter empty?

    Read the article

  • Is there a faster way to access a property member of a class using reflection?

    - by Ross Goddard
    I am currently using the following code to access the property of an object using reflection: Dim propInfo As Reflection.PropertyInfo = myType.GetProperty(propName) Dim objValue As Object = propInfo.GetValue(myObject, Nothing) I am having some issues with the speed since this type of code is being called many times and is causing some slowdown. I have been looking into using Reflection.Emit or dynamic methods, but I am not sure exactly how to make use of them. Background Information: I am creating a list of a subset of the properties of the object, associating them with some meta information (such as whether they can be loaded from the database or XML, whether they are editable, and whether the user can see them). This is for later consumption so we can write code such as: For Each prop As BaseWrapper In graphNode.NodeProperties prop.LoadFromDataRow(dr) Next The application makes heavy use of having access to this list. The problem is that on the initial load of a project, a large number of objects that make use of this are being created, so for each object created it is looping through this code a number of times. I initially tried adding each property to the list manually, but this ran into problems with not everything being initialized at the correct time, and some other issues. If there is no other good way, then I may have to rethink some of the design and see what else can be done to improve the performance.
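    One commonly suggested alternative to calling PropertyInfo.GetValue on every access is to build a compiled getter delegate once per property and cache it. The sketch below uses expression trees from C# (the same System.Linq.Expressions API is callable from VB.NET); GetterCache and BuildGetter are illustrative names, not part of the original code, and this is an assumption about what might help rather than a measured fix.

      // Hypothetical sketch: pay the reflection cost once per (type, property),
      // then reuse a compiled delegate for every subsequent read.
      using System;
      using System.Collections.Generic;
      using System.Linq.Expressions;
      using System.Reflection;

      static class GetterCache
      {
          private static readonly Dictionary<string, Func<object, object>> cache =
              new Dictionary<string, Func<object, object>>();

          public static object GetValue(object target, string propName)
          {
              string key = target.GetType().FullName + "." + propName;
              Func<object, object> getter;
              if (!cache.TryGetValue(key, out getter))
              {
                  getter = BuildGetter(target.GetType(), propName);
                  cache[key] = getter;
              }
              return getter(target);
          }

          private static Func<object, object> BuildGetter(Type type, string propName)
          {
              PropertyInfo prop = type.GetProperty(propName);
              // Builds: (object obj) => (object)((T)obj).Prop
              ParameterExpression obj = Expression.Parameter(typeof(object), "obj");
              Expression body = Expression.Convert(
                  Expression.Property(Expression.Convert(obj, type), prop),
                  typeof(object));
              return Expression.Lambda<Func<object, object>>(body, obj).Compile();
          }
      }

    With something like this in place, the hot path becomes a dictionary lookup plus a delegate call instead of a reflective GetValue, which is usually much faster; whether it helps in this particular application would need to be measured.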

    Read the article

  • How to limit Access-Control-Allow-Origin to a specific path?

    - by coderama
    I have a website that serves ads via JavaScript. So I basically allow the user to include my script: <script src="http://www.example.com/ads.js" ></script> <script> MYADDS.insertAdvert(); </script> The problem is, I kept getting "No Access-Control-Allow-Origin" errors. That was until I added this to my htaccess file: <IfModule mod_headers.c> Header set Access-Control-Allow-Origin "*" </IfModule> Problem is, this opens up my entire site and is probably a security risk. So, seeing as the ads.js file actually only does an ajax request to: http://www.example.com/place/where/my/adds/are/fed/from How can I make the above htaccess rule apply only to that path? Keep in mind, it's not an actual directory, so I can't put the htaccess file in that folder. It's actually a "virtual path". The site is built using Laravel and therefore does the typical Laravel path rewriting. Here's the full htaccess file: <IfModule mod_rewrite.c> <IfModule mod_negotiation.c> Options -MultiViews </IfModule> RewriteEngine On # Redirect Trailing Slashes... RewriteRule ^(.*)/$ /$1 [L,R=301] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php/$1 [L] </IfModule> Any ideas how to do this?
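    One way to scope the header - a sketch, assuming mod_setenvif and mod_headers are enabled and that /place/where/my/adds/are/fed/from really is the prefix the ajax request hits - is to flag matching request URIs with an environment variable and make the Header directive conditional on it. The CORS_ALLOWED variable name is made up for this example, and the REDIRECT_ variant is there in case Laravel's rewrite surfaces as an internal redirect; this would need testing against the actual setup.

      <IfModule mod_setenvif.c>
          # Flag only requests whose URI falls under the ad-feed path
          SetEnvIf Request_URI "^/place/where/my/adds/are/fed/from" CORS_ALLOWED=1
      </IfModule>
      <IfModule mod_headers.c>
          # Send the CORS header only when the flag is set
          Header set Access-Control-Allow-Origin "*" env=CORS_ALLOWED
          Header set Access-Control-Allow-Origin "*" env=REDIRECT_CORS_ALLOWED
      </IfModule>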

    Read the article

  • ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

    - by exxoid
    I have a Ubuntu box with a ProFTPD 1.3.4a Server, when I try to log in via my FTP Client I cannot do anything as it does not allow me to list directories; I have tried logging in as root and as a regular user and tried accessing different paths within the FTP Server. The error I get in my FTP Client is: Status: Retrieving directory listing... Command: CDUP Response: 250 CDUP command successful Command: PWD Response: 257 "/var" is the current directory Command: PASV Response: 227 Entering Passive Mode (172,16,4,22,237,205). Command: MLSD Response: 550 Access is denied. Error: Failed to retrieve directory listing Any idea? Here is the config of my proftpd: # # /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file. # To really apply changes, reload proftpd after modifications, if # it runs in daemon mode. It is not required in inetd/xinetd mode. # # Includes DSO modules Include /etc/proftpd/modules.conf # Set off to disable IPv6 support which is annoying on IPv4 only boxes. UseIPv6 off # If set on you can experience a longer connection delay in many cases. IdentLookups off ServerName "Drupal Intranet" ServerType standalone ServerIdent on "FTP Server ready" DeferWelcome on # Set the user and group that the server runs as User nobody Group nogroup MultilineRFC2228 on DefaultServer on ShowSymlinks on TimeoutNoTransfer 600 TimeoutStalled 600 TimeoutIdle 1200 DisplayLogin welcome.msg DisplayChdir .message true ListOptions "-l" DenyFilter \*.*/ # Use this to jail all users in their homes # DefaultRoot ~ # Users require a valid shell listed in /etc/shells to login. # Use this directive to release that constrain. # RequireValidShell off # Port 21 is the standard FTP port. Port 21 # In some cases you have to specify passive ports range to by-pass # firewall limitations. Ephemeral ports can be used for that, but # feel free to use a more narrow range. # PassivePorts 49152 65534 # If your host was NATted, this option is useful in order to # allow passive tranfers to work. You have to use your public # address and opening the passive ports used on your firewall as well. # MasqueradeAddress 1.2.3.4 # This is useful for masquerading address with dynamic IPs: # refresh any configured MasqueradeAddress directives every 8 hours <IfModule mod_dynmasq.c> # DynMasqRefresh 28800 </IfModule> # To prevent DoS attacks, set the maximum number of child processes # to 30. If you need to allow more than 30 concurrent connections # at once, simply increase this value. Note that this ONLY works # in standalone mode, in inetd mode you should use an inetd server # that allows you to limit maximum number of processes per service # (such as xinetd) MaxInstances 30 # Set the user and group that the server normally runs at. # Umask 022 is a good standard umask to prevent new files and dirs # (second parm) from being group and world writable. Umask 022 022 # Normally, we want files to be overwriteable. AllowOverwrite on # Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords: # PersistentPasswd off # This is required to use both PAM-based authentication and local passwords AuthPAMConfig proftpd AuthOrder mod_auth_pam.c* mod_auth_unix.c # Be warned: use of this directive impacts CPU average load! # Uncomment this if you like to see progress and transfer rate with ftpwho # in downloads. That is not needed for uploads rates. 
# UseSendFile off TransferLog /var/log/proftpd/xferlog SystemLog /var/log/proftpd/proftpd.log # Logging onto /var/log/lastlog is enabled but set to off by default #UseLastlog on # In order to keep log file dates consistent after chroot, use timezone info # from /etc/localtime. If this is not set, and proftpd is configured to # chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight # savings timezone regardless of whether DST is in effect. #SetEnv TZ :/etc/localtime <IfModule mod_quotatab.c> QuotaEngine off </IfModule> <IfModule mod_ratio.c> Ratios off </IfModule> # Delay engine reduces impact of the so-called Timing Attack described in # http://www.securityfocus.com/bid/11430/discuss # It is on by default. <IfModule mod_delay.c> DelayEngine on </IfModule> <IfModule mod_ctrls.c> ControlsEngine off ControlsMaxClients 2 ControlsLog /var/log/proftpd/controls.log ControlsInterval 5 ControlsSocket /var/run/proftpd/proftpd.sock </IfModule> <IfModule mod_ctrls_admin.c> AdminControlsEngine off </IfModule> # # Alternative authentication frameworks # #Include /etc/proftpd/ldap.conf #Include /etc/proftpd/sql.conf # # This is used for FTPS connections # #Include /etc/proftpd/tls.conf # # Useful to keep VirtualHost/VirtualRoot directives separated # #Include /etc/proftpd/virtuals.con # A basic anonymous configuration, no upload directories. # <Anonymous ~ftp> # User ftp # Group nogroup # # We want clients to be able to login with "anonymous" as well as "ftp" # UserAlias anonymous ftp # # Cosmetic changes, all files belongs to ftp user # DirFakeUser on ftp # DirFakeGroup on ftp # # RequireValidShell off # # # Limit the maximum number of anonymous logins # MaxClients 10 # # # We want 'welcome.msg' displayed at login, and '.message' displayed # # in each newly chdired directory. # DisplayLogin welcome.msg # DisplayChdir .message # # # Limit WRITE everywhere in the anonymous chroot # <Directory *> # <Limit WRITE> # DenyAll # </Limit> # </Directory> # # # Uncomment this if you're brave. # # <Directory incoming> # # # Umask 022 is a good standard umask to prevent new files and dirs # # # (second parm) from being group and world writable. # # Umask 022 022 # # <Limit READ WRITE> # # DenyAll # # </Limit> # # <Limit STOR> # # AllowAll # # </Limit> # # </Directory> # # </Anonymous> # Include other custom configuration files Include /etc/proftpd/conf.d/ UseReverseDNS off <Global> RootLogin on UseFtpUsers on ServerIdent on DefaultChdir /var/www DeleteAbortedStores on LoginPasswordPrompt on AccessGrantMsg "You have been authenticated successfully." </Global> Any idea what could be wrong? Thanks for your help!

    Read the article

  • How can I forward ALL traffic over a site-to-site VPN on Cisco ASA?

    - by Scott Clements
    Hi There, I currently have two Cisco ASA 5100 routers. They are at different physical sites and are configured with a site-to-site VPN which is active and working. I can communicate with the subnets on either site from the other and both are connected to the internet, however I need to ensure that all the traffic at my remote site goes through this VPN to my site here. I know that the web traffic is doing so as a "tracert" confirms this, but I need to ensure that all other network traffic is being directed over this VPN to my network here. Here is my config for the ASA router at my remote site: hostname ciscoasa domain-name xxxxx enable password 78rl4MkMED8xiJ3g encrypted names ! interface Ethernet0/0 nameif NIACEDC security-level 100 ip address x.x.x.x 255.255.255.0 ! interface Ethernet0/1 description External Janet Connection nameif JANET security-level 0 ip address x.x.x.x 255.255.255.248 ! interface Ethernet0/2 shutdown no nameif security-level 100 no ip address ! interface Ethernet0/3 shutdown no nameif security-level 100 ip address dhcp setroute ! interface Management0/0 nameif management security-level 100 ip address 192.168.100.1 255.255.255.0 management-only ! passwd 2KFQnbNIdI.2KYOU encrypted ftp mode passive clock timezone GMT/BST 0 clock summer-time GMT/BDT recurring last Sun Mar 1:00 last Sun Oct 2:00 dns domain-lookup NIACEDC dns server-group DefaultDNS name-server 154.32.105.18 name-server 154.32.107.18 domain-name XXXX same-security-traffic permit inter-interface same-security-traffic permit intra-interface access-list ren_access_in extended permit ip any any access-list ren_access_in extended permit tcp any any access-list ren_nat0_outbound extended permit ip 192.168.6.0 255.255.255.0 192.168.3.0 255.255.255.0 access-list NIACEDC_nat0_outbound extended permit ip 192.168.12.0 255.255.255.0 192.168.3.0 255.255.255.0 access-list JANET_20_cryptomap extended permit ip 192.168.12.0 255.255.255.0 192.168.3.0 255.255.255.0 access-list NIACEDC_access_in extended permit ip any any access-list NIACEDC_access_in extended permit tcp any any access-list JANET_access_out extended permit ip any any access-list NIACEDC_access_out extended permit ip any any pager lines 24 logging enable logging asdm informational mtu NIACEDC 1500 mtu JANET 1500 mtu management 1500 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-522.bin no asdm history enable arp timeout 14400 nat-control global (NIACEDC) 1 interface global (JANET) 1 interface nat (NIACEDC) 0 access-list NIACEDC_nat0_outbound nat (NIACEDC) 1 192.168.12.0 255.255.255.0 access-group NIACEDC_access_in in interface NIACEDC access-group NIACEDC_access_out out interface NIACEDC access-group JANET_access_out out interface JANET route JANET 0.0.0.0 0.0.0.0 194.82.121.82 1 route JANET 0.0.0.0 0.0.0.0 192.168.3.248 tunneled timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout uauth 0:05:00 absolute http server enable http 192.168.12.0 255.255.255.0 NIACEDC http 192.168.100.0 255.255.255.0 management http 192.168.9.0 255.255.255.0 NIACEDC no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto map JANET_map 20 match address 
JANET_20_cryptomap crypto map JANET_map 20 set pfs crypto map JANET_map 20 set peer X.X.X.X crypto map JANET_map 20 set transform-set ESP-AES-256-SHA crypto map JANET_map interface JANET crypto isakmp enable JANET crypto isakmp policy 10 authentication pre-share encryption aes-256 hash sha group 2 lifetime 86400 crypto isakmp policy 30 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 crypto isakmp policy 50 authentication pre-share encryption aes-256 hash sha group 5 lifetime 86400 tunnel-group X.X.X.X type ipsec-l2l tunnel-group X.X.X.X ipsec-attributes pre-shared-key * telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd address 192.168.100.2-192.168.100.254 management dhcpd enable management ! ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect http ! service-policy global_policy global prompt hostname context no asdm history enable Thanks in advance, Scott

    Read the article

  • Scapy installed, but when I use it as a module it's full of errors

    - by Rami Jarrar
    I installed Scapy 2.x (after getting some missing modules so it would install). Then I tried to use it as a module in my Python programs, but I can't - it gives me a lot of errors. I downloaded and installed some more missing modules, and after all that work I still end up with this: Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> from scapy.all import * File "C:\Python26\scapy\all.py", line 43, in <module> from crypto.cert import * File "C:\Python26\scapy\crypto\cert.py", line 15, in <module> from Crypto.PublicKey import * File "C:\Python26\lib\Crypto\PublicKey\RSA.py", line 34, in <module> from Crypto import Random File "C:\Python26\lib\Crypto\Random\__init__.py", line 29, in <module> import _UserFriendlyRNG File "C:\Python26\lib\Crypto\Random\_UserFriendlyRNG.py", line 36, in <module> from Crypto.Random.Fortuna import FortunaAccumulator File "C:\Python26\lib\Crypto\Random\Fortuna\FortunaAccumulator.py", line 36, in <module> import FortunaGenerator File "C:\Python26\lib\Crypto\Random\Fortuna\FortunaGenerator.py", line 32, in <module> from Crypto.Util import Counter File "C:\Python26\lib\Crypto\Util\Counter.py", line 27, in <module> import _counter ImportError: No module named _counter when running the following code: from scapy.all import * p=sr1(IP(dst=ip_dst)/ICMP()) if p: p.show() So what should I do? Is there a solution for this?

    Read the article

  • How to display image within cell boundary

    - by Jessy
    How can I place an image within the cell boundary? I mean, without it taking up the space of other cells? In the code below, random cells were selected to display images, one image per cell. The problem is that the image seems to spill into other cells as well. ... setPreferredSize(new Dimension(600,600)); final int ROWS = 6; final int COLS = 6; final int IMAGES = 10; setLayout(new GridBagLayout()); GridBagConstraints gc = new GridBagConstraints(); gc.weightx = 1d; gc.weighty = 1d; gc.insets = new Insets(0, 0, 0, 0);//top, left, bottom, and right gc.fill = GridBagConstraints.NONE; JLabel[][] label = new JLabel[ROWS][COLS]; Random rand = new Random(); // fill the panel with labels for (int i=0;i<IMAGES;i++){ ImageIcon icon = createImageIcon("myImage.jpg"); int r, c; do{ //pick a random empty cell to avoid overlapping images in the same cell r = (int)Math.floor(Math.random() * ROWS); c = (int)Math.floor(Math.random() * COLS); } while (label[r][c]!=null); //scale the image int x = rand.nextInt(20)+30; int y = rand.nextInt(20)+30; Image image = icon.getImage().getScaledInstance(x,y, Image.SCALE_SMOOTH); icon.setImage(image); JLabel lbl = new JLabel(icon); gc.gridx = r; gc.gridy = c; add(lbl, gc); //add the image to the cell label[r][c] = lbl; }

    Read the article

  • Why does Clojure hang after having performed my calculations?

    - by Thomas
    Hi all, I'm experimenting with filtering through elements in parallel. For each element, I need to perform a distance calculation to see if it is close enough to a target point. Never mind that data structures already exist for doing this; I'm just doing initial experiments for now. Anyway, I wanted to run some very basic experiments where I generate random vectors and filter them. Here's my implementation that does all of this (defn pfilter [pred coll] (map second (filter first (pmap (fn [item] [(pred item) item]) coll)))) (defn random-n-vector [n] (take n (repeatedly rand))) (defn distance [u v] (Math/sqrt (reduce + (map #(Math/pow (- %1 %2) 2) u v)))) (defn -main [& args] (let [[n-str vectors-str threshold-str] args n (Integer/parseInt n-str) vectors (Integer/parseInt vectors-str) threshold (Double/parseDouble threshold-str) random-vector (partial random-n-vector n) u (random-vector)] (time (println n vectors (count (pfilter (fn [v] (< (distance u v) threshold)) (take vectors (repeatedly random-vector)))))))) The code executes and returns what I expect, that is, the parameter n (length of vectors), vectors (the number of vectors) and the number of vectors that are closer than a threshold to the target vector. What I don't understand is why the program hangs for an additional minute before terminating. Here is the output of a run which demonstrates the error: $ time lein run 10 100000 1.0 [null] 10 100000 12283 [null] "Elapsed time: 3300.856 msecs" real 1m6.336s user 0m7.204s sys 0m1.495s Any comments on how to filter in parallel in general are also more than welcome, as I haven't yet confirmed that pfilter actually works.

    Read the article

  • How can I speed up insertion of many rows into a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL Server and decided it was not a good solution -- SQL is not good at random on a per-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time. It's in a List of custom objects. I need to take this random data and update many rows in the database with the new random data. I tried updating the rows one at a time, but that was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data. I.e. UPDATE t SET t.Address1=d.Address1 FROM Table1 t INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID. The database is very un-normalized, so this Acct data is sprinkled all over the place. I've got no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts: USE TheDatabase Insert tmp_RandomizedData SELECT 1,'4392 EIGHTH AVE','','JENNIFER CARTER','BARBARA CARTER' UNION ALL SELECT 2,'2168 MAIN ST','HNGR F','DANIEL HERNANDEZ','SUSAN MARTIN' -- etc. another 98 times... -- FYI, this is not real data! I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 mins to run the whole insert. The table doesn't have a primary key or any indexes. I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?
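    For the bulk-load itself, one commonly suggested alternative to batched INSERT ... UNION ALL statements is SqlBulkCopy. The sketch below is an assumption about how the staging could look, not the poster's code: AcctRow stands in for the custom object, the connection string is a placeholder, and the batch size is just a starting point to tune.

      // Hypothetical sketch: stage the in-memory rows in a DataTable and bulk-load
      // them into tmp_RandomizedData in one call instead of many INSERT batches.
      using System.Collections.Generic;
      using System.Data;
      using System.Data.SqlClient;

      class AcctRow
      {
          public int AcctId;
          public string Address1, Address2, Person1, Person2;
      }

      static class RandomDataLoader
      {
          public static void BulkLoad(IEnumerable<AcctRow> rows, string connectionString)
          {
              // Columns mirror the destination table
              DataTable table = new DataTable();
              table.Columns.Add("AcctId", typeof(int));
              table.Columns.Add("Address1", typeof(string));
              table.Columns.Add("Address2", typeof(string));
              table.Columns.Add("Person1", typeof(string));
              table.Columns.Add("Person2", typeof(string));

              foreach (AcctRow r in rows)
                  table.Rows.Add(r.AcctId, r.Address1, r.Address2, r.Person1, r.Person2);

              using (SqlConnection conn = new SqlConnection(connectionString))
              {
                  conn.Open();
                  using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
                  {
                      bulk.DestinationTableName = "tmp_RandomizedData";
                      bulk.BatchSize = 5000;   // tune; thousands of rows per batch is typical
                      bulk.WriteToServer(table);
                  }
              }
          }
      }

    Loading into a heap with no indexes, as described in the question, usually suits this kind of one-off load; the indexes can be added afterwards.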

    Read the article

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these? import numpy as np import time def reshape_vector(v): b = np.empty((3,1)) for i in range(3): b[i][0] = v[i] return b def unit_vectors(r): return r / np.sqrt((r*r).sum(0)) def calculate_dipole(mu, r_i, mom_i): relative = mu - r_i r_unit = unit_vectors(relative) A = 1e-7 num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i) den = np.sqrt(np.sum(relative*relative, 0))**3 B = np.sum(num/den, 1) return B N = 20000 # number of dipoles r_i = np.random.random((3,N)) # positions of dipoles mom_i = np.random.random((3,N)) # moments of dipoles a = np.random.random((3,3)) # three basis vectors for this crystal n = [10,10,10] # points at which to evaluate sum gamma_mu = 135.5 # a constant t_start = time.clock() for i in range(n[0]): r_frac_x = np.float(i)/np.float(n[0]) r_test_x = r_frac_x * a[0] for j in range(n[1]): r_frac_y = np.float(j)/np.float(n[1]) r_test_y = r_frac_y * a[1] for k in range(n[2]): r_frac_z = np.float(k)/np.float(n[2]) r_test = r_test_x +r_test_y + r_frac_z * a[2] r_test_fast = reshape_vector(r_test) B = calculate_dipole(r_test_fast, r_i, mom_i) omega = gamma_mu*np.sqrt(np.dot(B,B)) # write r_test, B and omega to a file frac_done = np.float(i+1)/(n[0]+1) t_elapsed = (time.clock()-t_start) t_remain = (1-frac_done)*t_elapsed/frac_done print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'

    Read the article

  • Wrapper class that creates objects at runtime and stores data in an array.

    - by scriptingalias
    I tried making a wrapper class that encapsulates an object, a string (for naming and differentiating the object instance), and an array to store data. The problem I'm having now is accessing this class using methods that determine the "name" of the object and also reading the array containing some random variables. import java.util.Arrays; import java.util.Random; public class WrapperClass { String varName; Object varData; int[] array = new int[10]; public WrapperClass(String name, Object data, int[] ARRAY) { varName = name; varData = data; array = ARRAY; } public static void getvalues() { } public static void main(String[] args) { int[] array = new int[10]; Random random = new Random(3134234); for(int i = 0; i < 10; i++) { for (int c = 0; c < 10; c++) { array[c] = random.nextInt();//randomly creates data } WrapperClass w = new WrapperClass("c" + i, new Object(),array); } } }

    Read the article

  • Reshaping a data frame into long format in R

    - by user1773115
    I'm struggling with a reshape in R. I have 2 types of error (err and rel_err) that have been calculated for 3 different models. This gives me a total of 6 error variables (i.e. err_1, err_2, err_3, rel_err_1, rel_err_2, and rel_err_3). For each of these types of error I have 3 different types of predictive validity tests (i.e. random holdouts, backcast, forecast). I would like to make my data set long so I keep the 4 types of test long while also making the two error measurements long. So in the end I will have one variable called err and one called rel_err, as well as an id variable for what model the error corresponds to (1, 2, or 3). Here is my data right now:
      iter  err_1        rel_err_1   err_2       rel_err_2   err_3       rel_err_3   test_type
      1     -0.09385732  -0.2235443  -0.1216982  -0.2898543  -0.1058366  -0.2520759  random
      1     0.16141630   0.8575728   0.1418732   0.7537442   0.1584816   0.8419816   back
      1     0.16376930   0.8700738   0.1431505   0.7605302   0.1596502   0.8481901   front
      1     0.14345986   0.6765194   0.1213689   0.5723444   0.1374676   0.6482615   random
      1     0.15890059   0.7435382   0.1589823   0.7439204   0.1608709   0.7527580   back
      1     0.14412360   0.6743928   0.1442039   0.6747684   0.1463520   0.6848202   front
    and here is what I would like it to look like:
      iter  model  err          rel_err  test_type
      1     1      -0.09385732  (#'s)    random
      1     2      -0.1216982   (#'s)    random
      1     3      -0.1216982   (#'s)    random
      and so on...
    I've tried playing around with the syntax but can't quite figure out what to put for the time.varying argument. Thanks very much for any help you can offer.

    Read the article

  • What is the fastest (to access) struct-like object in Python?

    - by DNS
    I'm optimizing some code whose main bottleneck is running through and accessing a very large list of struct-like objects. Currently I'm using namedtuples, for readability. But some quick benchmarking using 'timeit' shows that this is really the wrong way to go where performance is a factor: Named tuple with a, b, c: >>> timeit("z = a.c", "from __main__ import a") 0.38655471766332994 Class using __slots__, with a, b, c: >>> timeit("z = b.c", "from __main__ import b") 0.14527461047146062 Dictionary with keys a, b, c: >>> timeit("z = c['c']", "from __main__ import c") 0.11588272541098377 Tuple with three values, using a constant key: >>> timeit("z = d[2]", "from __main__ import d") 0.11106188992948773 List with three values, using a constant key: >>> timeit("z = e[2]", "from __main__ import e") 0.086038238242508669 Tuple with three values, using a local key: >>> timeit("z = d[key]", "from __main__ import d, key") 0.11187358437882722 List with three values, using a local key: >>> timeit("z = e[key]", "from __main__ import e, key") 0.088604143037173344 First of all, is there anything about these little timeit tests that would render them invalid? I ran each several times, to make sure no random system event had thrown them off, and the results were almost identical. It would appear that dictionaries offer the best balance between performance and readability, with classes coming in second. This is unfortunate, since, for my purposes, I also need the object to be sequence-like; hence my choice of namedtuple. Lists are substantially faster, but constant keys are unmaintainable; I'd have to create a bunch of index-constants, i.e. KEY_1 = 1, KEY_2 = 2, etc. which is also not ideal. Am I stuck with these choices, or is there an alternative that I've missed?

    Read the article

  • What is the point of having a key_t if what will be the key to access shared memory is the return value of shmget()?

    - by devoured elysium
    When using shared memory, why should we care about creating a key with key_t ftok(const char *path, int id); in the following bit of code? key_t key; int shmid; key = ftok("/home/beej/somefile3", 'R'); shmid = shmget(key, 1024, 0644 | IPC_CREAT); From what I've come to understand, what is needed to access a given shared memory segment is the shmid, not the key. Or am I wrong? If what we need is the shmid, why not just create a random key every time? Edit: In the linked text one can read: What about this key nonsense? How do we create one? Well, since the type key_t is actually just a long, you can use any number you want. But what if you hard-code the number and some other unrelated program hardcodes the same number but wants another queue? The solution is to use the ftok() function, which generates a key from two arguments. Reading this gives me the impression that what one needs to attach to a shared-memory block is the key. But this isn't true, is it? Thanks

    Read the article

  • [Processing/Java] Visibility/Layering Issue

    - by nnash
    I'm working on a small sketch in Processing where I am making a "clock" using the time functions and drawing ellipses across the canvas based on milliseconds, seconds and minutes. I'm using a for loop to draw all of the ellipses and each for loop is inside its own method. I'm calling each of these methods in the draw function. However, for some reason only the first method that is called is being drawn, when ideally I would like all of them to be visibly rendered. //setup program void setup() { size(800, 600); frameRate(30); background(#eeeeee); smooth(); } void draw(){ milliParticles(); secParticles(); minParticles(); } //time based particles void milliParticles(){ for(int i = int(millis()); i >= 0; i++) { ellipse(random(800), random(600), 5, 5 ); fill(255); } } void secParticles() { for(int i = int(second()); i >= 0; i++) { fill(0); ellipse(random(800), random(600), 10, 10 ); background(#eeeeee); } } void minParticles(){ for(int i = int(minute()); i >= 0; i++) { fill(50); ellipse(random(800), random(600), 20, 20 ); } }

    Read the article

  • Logrotate not doing any rotation

    - by blizz
    I just set up LogRotate on my RHEL6 server so that it rotates my custom Apache log files. However, it doesn't do anything when i try manually running it. I expect it to rotate the log files "access.log" and "err.log". They have been there for a few days and need to be rotated. Here is the output: [root@pc1 httpd]# logrotate -d -f /etc/logrotate.d/apache reading config file /etc/logrotate.d/apache reading config info for /var/log/httpd/*log /var/www/html/NSLogs/access.log /var/www/html/NSErrorLogs/err.log Handling 1 logs rotating pattern: /var/log/httpd/*log /var/www/html/NSLogs/access.log /var/www/html/NSErrorLogs/err.log forced from command line (no old logs will be kept) empty log files are rotated, old logs are removed considering log /var/log/httpd/access_log log needs rotating considering log /var/log/httpd/error_log log needs rotating considering log /var/www/html/NSLogs/access.log log needs rotating considering log /var/www/html/NSErrorLogs/err.log log needs rotating rotating log /var/log/httpd/access_log, log->rotateCount is 0 dateext suffix '-20131023' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding old rotated logs failed fscreate context set to unconfined_u:object_r:httpd_log_t:s0 renaming /var/log/httpd/access_log to /var/log/httpd/access_log-20131023 disposeName will be /var/log/httpd/access_log-20131023.gz running postrotate script running script with arg /var/log/httpd/access_log: " /usr/bin/killall -HUP httpd " compressing log with: /bin/gzip removing old log /var/log/httpd/access_log-20131023.gz error: error opening /var/log/httpd/access_log-20131023.gz: No such file or directory rotating log /var/log/httpd/error_log, log->rotateCount is 0 dateext suffix '-20131023' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding old rotated logs failed fscreate context set to unconfined_u:object_r:httpd_log_t:s0 renaming /var/log/httpd/error_log to /var/log/httpd/error_log-20131023 disposeName will be /var/log/httpd/error_log-20131023.gz running postrotate script running script with arg /var/log/httpd/error_log: " /usr/bin/killall -HUP httpd " compressing log with: /bin/gzip removing old log /var/log/httpd/error_log-20131023.gz error: error opening /var/log/httpd/error_log-20131023.gz: No such file or directory rotating log /var/www/html/NSLogs/access.log, log->rotateCount is 0 dateext suffix '-20131023' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding old rotated logs failed fscreate context set to unconfined_u:object_r:httpd_sys_rw_content_t:s0 renaming /var/www/html/NSLogs/access.log to /var/www/html/NSLogs/access.log-20131023 disposeName will be /var/www/html/NSLogs/access.log-20131023.gz running postrotate script running script with arg /var/www/html/NSLogs/access.log: " /usr/bin/killall -HUP httpd " compressing log with: /bin/gzip removing old log /var/www/html/NSLogs/access.log-20131023.gz error: error opening /var/www/html/NSLogs/access.log-20131023.gz: No such file or directory rotating log /var/www/html/NSErrorLogs/err.log, log->rotateCount is 0 dateext suffix '-20131023' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding old rotated logs failed fscreate context set to unconfined_u:object_r:httpd_sys_rw_content_t:s0 renaming /var/www/html/NSErrorLogs/err.log to /var/www/html/NSErrorLogs/err.log-20131023 disposeName will be /var/www/html/NSErrorLogs/err.log-20131023.gz running postrotate script running script with arg /var/www/html/NSErrorLogs/err.log: " /usr/bin/killall -HUP httpd " 
compressing log with: /bin/gzip removing old log /var/www/html/NSErrorLogs/err.log-20131023.gz error: error opening /var/www/html/NSErrorLogs/err.log-20131023.gz: No such file or directory

    Read the article

  • TFS: cannot build my app, CS0042: Unexpected error creating debug information file. Access is denied.

    - by zeta
    I am trying to deploy my MVC app, but in the TFS build I get this error message: CSC : fatal error CS0042: Unexpected error creating debug information file 'c:\Builds\2\STAS\STAS\Sources\Documents and Settings\jyothisrinivasa\My Documents\Visual Studio 2010\Projects\STAS\STAS\obj\Debug\STAS.PDB' -- 'c:\Builds\2\STAS\STAS\Sources\Documents and Settings\jyothisrinivasa\My Documents\Visual Studio 2010\Projects\STAS\STAS\obj\Debug\STAS.pdb: Access is denied. I have excluded the Debug directory from my application, so why am I getting this?

    Read the article

  • "Unsafe JavaScript attempt to access frame with URL..." error when developing for Facebook Open Grap

    - by Neil Sarkar
    Chrome (or any other WebKit browser) throws a ton of these "Unsafe JavaScript attempt to access frame with URL..." warnings when working with the Facebook Open Graph API. It doesn't interfere with actual operation, but it does make the JavaScript console basically unusable. Is there a way to suppress these warnings in the console? Or if there are other solutions you can think of, I would really appreciate it. Thanks.

    Read the article

  • phpMyAdmin 3.2.4 error #1227 - Access denied; you need the SUPER privilege for this operation

    - by CheeseConQueso
    I want to make a simple sql trigger on inserts... something like this CREATE TRIGGER filter AFTER INSERT ON table FOR EACH ROW DELETE FROM table WHERE field LIKE "%chicken%" OR field LIKE "%egg%"; After running it through phpMyAdmin 3.2.4 & MySQL 5.0.45 you get an error that says #1227 - Access denied; you need the SUPER privilege for this operation Is this syntax wrong or do I have to change something else to get past this error?

    Read the article

  • Can I access a struct inside of a struct without using the dot operator?

    - by yan bellavance
    I have 2 structures that have 90% of their fields the same. I want to group those fields in a structure, but I do not want to use the dot operator to access them. The reason is I have already coded with the first structure and have just created the second one. Before: struct{ int a; int b; int c; object1 name; } str1; struct{ int a; int b; int c; object2 name; } str2; Now I would create a third struct: struct{ int a; int b; int c; } str3; and would change str1 and str2 to this: struct{ str3 str; object1 name; } str1; struct { str3 str; object2 name; } str2; Finally I would like to be able to access a, b and c by doing: str1 myStruct; myStruct.a; myStruct.b; myStruct.c; and not: myStruct.str.a; myStruct.str.b; myStruct.str.c; Is there a way to do such a thing? The reason for doing this is that I want to keep the integrity of the data if changes to the struct were to occur, to not repeat myself, to not have to change my existing code, and to not have fields nested too deeply. RESOLVED: thanks for all your answers. The final way of doing it, so that I could also use auto-completion, was the following: struct str11 { int a; int b; int c; }; typedef struct str22 : public str11 { QString name; }hi;

    Read the article
