My parallel notebook
more ./2019spring/Heikki/MyRand.m

function a = MyRand(A)
tic
parfor i = 1:200
    a(i) = max(abs(eig(rand(A))));
end
toc

%{
ans =
 Pool with properties:
            Connected: true
           NumWorkers: 1
              Cluster: local
        AttachedFiles: {}
    AutoAddClientPath: true
          IdleTimeout: 30 minutes (30 minutes remaining)
          SpmdEnabled: true
%}

type pctdemo_aux_gop_maxloc

function [val, loc] = pctdemo_aux_gop_maxloc(inval)
%PCTDEMO_AUX_GOP_MAXLOC Find maximum value of a variant and its labindex.
%   [val, loc] = pctdemo_aux_gop_maxloc(inval) returns to val the maximum
%   value of inval across all the labs. The labindex where this maximum
%   value resides is returned to loc.
%   Copyright 2007 The MathWorks, Inc.
out = gop(@iMaxLoc, {inval, labindex*ones(size(inval))});
val = out{1};
loc = out{2};
end

function out = iMaxLoc(in1, in2)
% Calculate the max values and their locations. Return them as a cell array.
in1Largest = (in1{1} >= in2{1});
maxVal = in1{1};
maxVal(~in1Largest) = in2{1}(~in1Largest);
maxLoc = in1{2};
maxLoc(~in1Largest) = in2{2}(~in1Largest);
out = {maxVal, maxLoc};
end

Finding Locations of Min and Max
We need to do just a little bit of programming to find the labindex corresponding to where the element-by-element maximum of x across the labs occurs. We can do this in just a few lines of code:

Optimization methods
- Steepest descent, conjugate gradient
- (quasi)Newton
- Interior point
- Genetic algo
- Simulated annealing
Simulated annealing
- https://se.mathworks.com/help/gads/simulated-annealing-examples.html
min f(x) = (4 - 2.1*x1^2 + x1^4/3)*x1^2 + x1*x2 + (-4 + 4*x2^2)*x2^2;
This function is known as "cam," as described in L.C.W. Dixon and G.P. Szego [1].
function y = simple_objective(x)
%SIMPLE_OBJECTIVE Objective function for PATTERNSEARCH solver
x1 = x(1);
x2 = x(2);
y = (4-2.1.*x1.^2+x1.^4./3).*x1.^2+x1.*x2+(-4+4.*x2.^2).*x2.^2;

All Global Optimization Toolbox solvers assume that the objective has one input x, where x has as many elements as the number of variables in the problem. The objective function computes the scalar value of the objective function and returns it in its single output argument y.

ObjectiveFunction = @simple_objective;
x0 = [0.5 0.5];   % Starting point
rng default       % For reproducibility
[x,fval,exitFlag,output] = simulannealbnd(ObjectiveFunction,x0)

x = ..
fval = ..
exitFlag = ..
output =
  struct with fields:
     iterations: 2948
      funccount: 2971
        message: 'Optimization terminated: change in best function value less than options.FunctionTolerance.'
       rngstate: [1x1 struct]
    problemtype: 'unconstrained'
    temperature: [2x1 double]
      totaltime: 3.8543

Passing extra parameters:

a = 4; b = 2.1; c = 4;   % Define constant values
ObjectiveFunction = @(x) parameterized_objective(x,a,b,c);
x0 = [0.5 0.5];
[x,fval] = simulannealbnd(ObjectiveFunction,x0)

- Minimize Function with Many Local Minima, plotobjective
- Global opt user's guide
- http://www.theprojectspot.com/tutorial-post/simulated-annealing-algorithm-for-beginners/
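The simulannealbnd call above passes parameterized_objective, which is not defined in these notes. A minimal sketch consistent with the constants a, b, c and the "cam" objective could look like this (the function body is my own reconstruction, not from the MathWorks example file):

```matlab
function y = parameterized_objective(x, a, b, c)
%PARAMETERIZED_OBJECTIVE "Cam" objective with its constants exposed.
% With a = 4, b = 2.1, c = 4 this reduces to simple_objective above.
x1 = x(1);
x2 = x(2);
y = (a - b*x1^2 + x1^4/3)*x1^2 + x1*x2 + (-c + c*x2^2)*x2^2;
end
```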
Finding an optimal solution for certain optimisation problems can be an incredibly difficult task, often practically impossible. This is because when a problem gets sufficiently large we need to search through an enormous number of possible solutions to find the optimal one. Even with modern computing power there are still often too many possible solutions to consider. In this case because we can't realistically expect to find the optimal one within a sensible length of time, we have to settle for something that's close enough.
If you're not familiar with the traveling salesman problem it might be worth taking a look at my previous tutorial before continuing.
Travelling salesman probl. (TSP)
Exe-sols, optim. 2018kevat
https://se.mathworks.com/help/releases/R2018b/pdf_doc/optim/optim_tb.pdf

>> help GlobalOptimSolution
 GlobalOptimSolution   Global Optimization solution class.
    GlobalOptimSolution is a class that encapsulates a solution from a call
    to an Optimization Toolbox solver for use with the Global Optimization
    Toolbox.

    The GlobalOptimSolution class has the following read-only properties:

    GlobalOptimSolution properties:
        X        - Minimum of the objective function
        Fval     - Value of the objective function at X
        Exitflag - A flag that describes the exit condition of the
                   Optimization Toolbox solver
        Output   - Structure describing the final state of the
                   Optimization Toolbox solver
        X0       - Cell array of start points that lead to the minimum
                   within the tolerances specified in the global solver.

    See also fmincon, fminunc, lsqcurvefit, lsqnonlin
    Reference page for GlobalOptimSolution

GlobalSearch

Typical workflow to run the GlobalSearch solver:
==============================================
1. Set up the PROBLEM structure
       PROBLEM = createOptimProblem('fmincon','objective',...)
2. Construct the GlobalSearch solver
       GS = GlobalSearch
3. Run the solver
       run(GS,PROBLEM)

Example: Run global search on the optimization problem
    minimize peaks(x, y);
    subject to (x+3)^2 + (y+3)^2 <= 36,
               -3 <= x <= 3 and -3 <= y <= 3.

Specify the first constraint in a MATLAB file function such as
    function [c,ceq] = mycon(x)
    c = (x(1)+3)^2 + (x(2)+3)^2 - 36;
    ceq = [];

Implement the typical workflow
    problem = createOptimProblem('fmincon','objective', ...
        @(x) peaks(x(1), x(2)), 'x0', [1 2], 'lb', [-3 -3], ...
        'ub', [3 3], 'nonlcon', @mycon)
    gs = GlobalSearch
    [x, f] = run(gs, problem)

See also MultiStart

Multistart: help Tutorial for the opt toolbox
https://se.mathworks.com/help/optim/ug/parallel-computing-in-optimization-toolbox-functions.html
https://www.mathworks.com/examples/optimization
http://math.aalto.fi/opetus/MatOhjelmistot/2018kevat/Heikki/Lecture4/html/minmax2dsolver2.html
file:///home/heikki/Dropbox/Public/Tietokoneharjoitukset11/MatOhjelmistot/2018kevat/Heikki/TipsForAccelerating_Altman_Conclusions.html contents here, good for final conclusions (mention also MEX).
MATLAB JIT technology continues to improve with each release, providing better performance without requiring user effort. But there will always be room for human insight to optimize performance, and we should not neglect this. Perhaps it's a matter of setting our expectations: MATLAB solves our equations and models, but we know we must invest time to properly set up the problem first. The MATLAB engine can do a lot to speed up our code, but it can do much better when we spend some time optimizing our code first.
Luckily, there are numerous ways in which we can improve MATLAB performance. In fact, there are so many ways to achieve our performance goals that we can take our pick based on aesthetic preferences and subjective experience: Some people use vectorization, others like parallelization, some others prefer to invest in smarter algorithms or faster hardware, others trade memory for performance or latency for throughput, still others display a GUI that just provides a faster impression with dynamic feedback. Even if one technique fails or is inapplicable, there are many other alternatives to try. Just use the profiler and some common sense, and you are halfway there. Good luck! su 24.3.19
...2019spring/Heikki/instructions18.html includes:

Triton userid
Login: scip
Password: M...Lab...2018 (send email if (...) forgotten)
Valid till: 2018-03-16

slogin -X -lscip triton.aalto.fi
$ cd $WRKDIR
$ mkdir USER_OWN_DIR
$ module load matlab
$ matlab &
- my_paralleldemo_quadpi_mpi.m in .../2018kevat/Heikki
- TestMyiMaxLoc.m
- Use distributed arrays to solve linear equations
- https://se.mathworks.com/help/parallel-computing/examples/Use-Distributed-Arrays-to-Solve-Systems-of-Linear-Equations-with-Direct-Methods.html
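A minimal sketch of the distributed-array direct solve from that example (the matrix size and the residual check are my own additions; assumes Parallel Computing Toolbox and an open pool):

```matlab
n = 1000;
A = rand(n, n, 'distributed');   % matrix spread across the workers
b = ones(n, 1, 'distributed');
x = A \ b;                       % backslash runs on the distributed data
r = norm(b - A*x);               % residual, computed on the workers
gather(r)                        % bring the scalar back to the client
```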
Find Automatic Parallel Support
Here
To browse supported functions by product, click the Functions tab, select a product, and select the check box Automatic Parallel Support. If you select a product that does not have functions with automatic parallel support, then the Automatic Parallel Support filter is not available.

help fmincon: nonlinear optimization, serial or parallel. The following Optimization Toolbox™ solvers can automatically distribute the numerical estimation of gradients of objective functions and nonlinear constraint functions to multiple processors:

fmincon, fminunc, fgoalattain, fminimax, fsolve, lsqcurvefit, lsqnonlin

Manual: distcom.pdf
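A sketch of switching on that automatic parallel gradient estimation for fmincon (the objective and start point here are purely illustrative; assumes an open parallel pool):

```matlab
% Parallel numerical gradients: set UseParallel in the solver options.
options = optimoptions('fmincon', 'UseParallel', true, 'Display', 'iter');
fun = @(x) (x(1)-1)^2 + 100*(x(2)-x(1)^2)^2;   % illustrative objective
x0 = [-1 2];
[x, fval] = fmincon(fun, x0, [], [], [], [], [], [], [], options);
```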
3. spmd
When to use spmd
The spmd statement lets you define a block of code to run simultaneously on multiple workers. Variables assigned inside the spmd statement on the workers allow direct access to their values from the client by reference via Composite objects. The "multiple data" aspect means that even though the spmd statement runs identical code on all workers, each worker can have different, unique data for that code. So multiple data sets can be accommodated by multiple workers.
spmd (3)
    labdata = load(['datafile_' num2str(labindex) '.ascii'])
    result = MyFunction(labdata)
end

For example, use a codistributed array in an spmd statement:

spmd
    RR = rand(30, codistributor())
end

More on p. 5-5

Path
When the workers are running on a different platform than the client, use the function pctRunOnAll to properly set the MATLAB path on all workers. + How about a cluster?

Limitations: 3-6
Access Worker Variables with Composites
+ Composite: two ways to create one:
- the Composite function on the client
- defining variables on workers inside an spmd statement
After the spmd statement, those data values are accessible on the client as Composites. Composite objects resemble cell arrays, and behave similarly.

spmd    % Uses all 3 workers
    MM = magic(labindex+2);   % MM is a variable on each worker
end
MM{1}   % In the client, MM is a Composite with one element per worker.
MM{2}

Data transfer from workers to client
Data transfers from worker to client when you explicitly assign a variable in the client workspace using a Composite element:

M = MM{1}   % Transfer data from worker 1 to variable M on the client.

My own remarks
>> M1=MM{1}
M1 =
     8     1     6
     3     5     7
     4     9     2

>> M2=MM{2}
M2 =
    16     2     3    13
     5    11    10     8
     9     7     6    12
     4    14    15     1

>> Mrow=cell(1,2)
Mrow =
  1×2 cell array
    {0×0 double}    {0×0 double}

>> Mrow=MM(1:end)
Mrow =
  1×2 cell array
    {3×3 double}    {4×4 double}

>> Mcol=MM(:)
Mcol =
  2×1 cell array
    {3×3 double}
    {4×4 double}

>> delete(gcp)
Parallel pool using the 'local' profile is shutting down.

>> whos
  Name      Size       Bytes  Class        Attributes
  M1        3x3           72  double
  M2        4x4          128  double
  MM        1x2          281  Composite
  Mcol      2x1          424  cell
  Mrow      1x2          424  cell
  RR        30x30        317  distributed
  ans       3x3           72  double

>> Mrow
Mrow =
  1×2 cell array
    {3×3 double}    {4×4 double}

>> Mcol
Mcol =
  2×1 cell array
    {3×3 double}
    {4×4 double}

>> MM
MM =
Invalid Composite (the parallel pool in use has been closed).
Non-successful attempts:

>> M=[MM{:}]

>> M=cell(1,2)
M =
  1×2 cell array
    {0×0 double}    {0×0 double}

>> M{1:2}=MM{1:2}
Expected one output from a curly brace or dot indexing expression, but there were 2 results.

>> M(1:2)=MM(1:2)
M =
  1×2 cell array
    {3×3 double}    {4×4 double}

>> Mrow=MM(1:end)
Mrow =
  1×2 cell array
    {3×3 double}    {4×4 double}

>> Mcol=MM(:)
Mcol =
  2×1 cell array
    {3×3 double}
    {4×4 double}

>> Mcol{1}

>> delete(gcp)
Parallel pool using the 'local' profile is shutting down.

>> whos
  Name      Size       Bytes  Class        Attributes
  M1        3x3           72  double
  M2        4x4          128  double
  MM        1x2          281  Composite
  RR        30x30        317  distributed
  ans       3x3           72  double

>> MM
MM =
Invalid Composite (the parallel pool in use has been closed).
3-10 Variable Persistence and Sequences of spmd
The values stored on the workers are retained between spmd statements. This allows you to use multiple spmd statements in sequence, and continue to use the same variables defined in previous spmd blocks.
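A minimal sketch of that persistence (my own example; assumes an open parallel pool):

```matlab
spmd
    a = labindex;    % defined on the workers in the first block
end
spmd
    b = a + 10;      % 'a' still exists on the workers in the next block
end
b{1}    % Composite element from worker 1; equals 11
```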